Search Results

Search found 32492 results on 1300 pages for 'reporting database'.


  • How can you centre an image within the image control in SQL Server Reporting Services 2008?

    - by Keith
    I have got an image in an image control on a report in SSRS 2008. The image is coming from an external source, and varies in width. I would like to centre the image within the control, but the image control does not have an equivalent of the text box's TextAlign property to allow right/left/center alignment to be done automatically. I have seen methods to dynamically calculate the amount of left padding as a hack to solve this, http://blogs.msdn.com/chrishays/archive/2004/10/27/CenteredImages.aspx, but the solution gives an error in SSRS 2008, perhaps not surprising given the age of the article. Has anybody got a solution to this for SSRS 2008?

    Read the article

  • What can I do to give some more love and disk space to my database on Ubuntu?

    - by Yaron Naveh
    I'm new to Linux. I've deployed a db to an Ubuntu server on Amazon and found out I'm low on disk space. I ran df (see below) and found out that I'm at 89% capacity on one file system, but less on others. What does this mean? Do I have a few partitions, and can I now utilize others besides /dev/xvda1? Also, /dev/xvdb seems large; is it safe to put the db on it and use only it? If so, do I need to mount it or do something special?

        $> df -lah
        Filesystem  Size  Used  Avail  Use%  Mounted on
        /dev/xvda1  8.0G  6.7G  914M   89%   /
        proc           0     0     0     -   /proc
        sysfs          0     0     0     -   /sys
        none           0     0     0     -   /sys/fs/fuse/connections
        none           0     0     0     -   /sys/kernel/debug
        none           0     0     0     -   /sys/kernel/security
        udev        3.7G  8.0K  3.7G    1%   /dev
        devpts         0     0     0     -   /dev/pts
        tmpfs       1.5G  164K  1.5G    1%   /run
        none        5.0M     0  5.0M    0%   /run/lock
        none        3.7G     0  3.7G    0%   /run/shm
        /dev/xvdb   414G  199M  393G    1%   /mnt

    Read the article

  • Import SQL dump into MySQL

    - by BryanWheelock
    I'm confused about how to import a SQL dump file. I can't seem to import the database without creating the database first in MySQL. Here, username is the username of someone with access to the database on the original server, and database_name is the name of the database from the original server. This is the error displayed when database_name has not yet been created:

        $ mysql -u username -p -h localhost database_name < dumpfile.sql
        Enter password:
        ERROR 1049 (42000): Unknown database 'database_name'

    If I log into MySQL as root and create the database:

        mysql -u root
        create database database_name;
        create user username; # same username as the user from the database I got the dump from
        grant all privileges on database_name.* to username@"localhost" identified by 'password';
        exit

    and then attempt to import the SQL dump again:

        $ mysql -u username -p database_name < dumpfile.sql
        Enter password:
        ERROR 1007 (HY000) at line 21: Can't create database 'database_name'; database exists

    How am I supposed to import the SQL dump file?
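    A hedged reading of those two errors, for context: the 1007 error at line 21 of the dump suggests the dump file contains its own CREATE DATABASE statement, which fails once the database already exists; and when the database does not exist, naming it on the command line fails first with error 1049. A minimal sketch of one way out, assuming the dump really does carry its own CREATE DATABASE:

        -- As a sufficiently privileged user, discard the half-created database:
        DROP DATABASE IF EXISTS database_name;
        -- Then import WITHOUT naming a database on the command line, so the
        -- dump's own CREATE DATABASE / USE statements take effect:
        --   $ mysql -u root -p < dumpfile.sql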

    Read the article

  • iPhone: Core Data: Updating a pre-filled database in future app versions.

    - by Cuzog
    I am creating an app with a database of information that needs to be pre-filled. This data will change in future versions. In the same database, I also need to store user-editable information, since that user-edited data directly relates to the pre-filled data. My question is: if I'm pre-filling the database by creating a duplicate data model in a second app and copying over the Core Data file before release, how would I handle updates to that data in future versions of the app without destroying the user's existing data? Do the Core Data migration methods handle this, or must I write custom methods to programmatically handle the merge at first app launch?

    Read the article

  • 12c New Feature - Active Data Guard Far Sync

    - by Jian Zhang
    Overview
    ================
    Active Data Guard Far Sync is a new feature of Oracle 12c (it is also called a Far Sync standby). A Far Sync instance is deployed between the primary database and a remote standby database: the primary ships redo synchronously to the nearby Far Sync instance, and the Far Sync instance forwards the redo asynchronously to the remote standby database. A Far Sync instance is lightweight: it is controlled by a special control file, needs only a few init parameters, and has no data files, so it consumes very few resources.

    With redo shipping in Maximum Availability mode, a Far Sync instance is deployed close to the primary database. The primary ships redo synchronously to the Far Sync instance, so zero data loss protection is achieved without the performance penalty of shipping redo synchronously over a long distance; the Far Sync instance then forwards the redo asynchronously to the remote standby database.

    With redo shipping in Maximum Performance mode, a Far Sync instance close to the primary receives redo from the primary and forwards it to one or more remote standby databases, offloading redo shipping work from the primary.

    Far Sync instances do not take part in Data Guard role transitions (role transitions such as switchover/failover are performed in 12c the same way as before). After a switchover or failover, the new primary needs a Far Sync instance close to itself so the configuration retains the same level of protection. To keep the Far Sync instance itself highly available, it is recommended to configure a second, alternate Far Sync instance.

    Here is how to configure a Far Sync instance.

    Configuring Far Sync
    ================
    1. Set up Data Guard as usual (for 11.2, see «Active Database Duplication for A standby database»).
    2. Create the Far Sync instance. A Far Sync instance is controlled by a special control file, needs only a few init parameters, and has no data files. Create the control file on the primary:
       SQL> ALTER DATABASE CREATE FAR SYNC INSTANCE CONTROLFILE AS '/tmp/controlfs01.ctl';
    3. Configure the primary to ship redo synchronously to the Far Sync instance, for example by setting LOG_ARCHIVE_DEST_2 on the primary:
       LOG_ARCHIVE_DEST_2='SERVICE=dg12cfs SYNC AFFIRM MAX_FAILURE=1 ALTERNATE=LOG_ARCHIVE_DEST_3 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dg12cfs'
    4. Configure the Far Sync instance to forward redo asynchronously to the standby by setting LOG_ARCHIVE_DEST_2 on the Far Sync instance:
       LOG_ARCHIVE_DEST_2='SERVICE=dg12cs ASYNC VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=dg12cs'
    5. To keep the Far Sync instance itself highly available, configure a second, alternate Far Sync instance.
    6. Verify the configuration:
       SQL> select * from V$DATAGUARD_CONFIG;

       DB_UNIQUE_NAME   PARENT_DBUN   DEST_ROLE           CURRENT_SCN   CON_ID
       ---------------  ------------  ------------------  -----------   ------
       dg12cfs          dg12cp        FAR SYNC INSTANCE        682995        0
       dg12cs           dg12cfs       PHYSICAL STANDBY         682995        0
       dg12cp           NONE          PRIMARY DATABASE         683138        0

    The complete step-by-step configuration is described in: Oracle_12c_Active_Data_Guard_Far_Sync_v1.pdf

    Read the article

  • Why does clearing an SQLite database not reduce its size?

    - by Thorsten Lorenz
    I have an SQLite database. I created the tables and filled them with a considerable amount of data, then cleared the database by deleting and recreating the tables. I confirmed that all the data had been removed and the tables were empty by looking at them using SQLite Administrator. The problem is that the size of the database file (*.db3) remained the same after it had been cleared. This is of course not desirable, as I would like to regain the space that was taken up by the data once I clear it. Has anyone made a similar observation, and does anyone know what is going on? What can be done about it?
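    A likely explanation, offered as an aside: SQLite keeps freed pages inside the file for reuse and only shrinks the file when asked to compact it. A minimal sketch in SQL:

        -- Rebuilds the database file and returns the freed pages to the filesystem.
        VACUUM;

        -- Alternatively, auto_vacuum reclaims pages as they are freed; note that it
        -- only takes effect on a brand-new database, or after running VACUUM once
        -- the pragma has been set.
        PRAGMA auto_vacuum = FULL;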

    Read the article

  • A 2-minute video about SQL Compare

    - by CatherineRussell
    It is nice to start blogging again! I am working on a new project in a small company now. We do not have a full-time database admin, so I have to cover multiple roles: gathering requirements, writing docs and creating diagrams, designing the app, writing code, testing, and the DBA role. I am not a DBA, but I have to make day-to-day database changes: adding new columns and tables. Check out this 2-minute video about SQL Compare. This tool saves time by automatically comparing and synchronizing database schemas; it eliminates mistakes when migrating database changes from dev, to test, to production; speeds up the deployment of new database schema updates; generates T-SQL scripts to update one database to match the schema of another; finds and fixes errors caused by differences between databases; and keeps an accurate history of all previous database records. http://www.red-gate.com/products/SQL_Compare/index.htm

    Read the article

  • SQL Reporting Services: Why does my report shrink when it's emailed?

    - by Josh Yeager
    I created a simple report and uploaded it to my report server. It looks correct on the report server, but when I set up an email subscription, the report is much narrower than it is supposed to be. Here is what the report looks like in the designer. It looks similar when I view it on the report server: [http://img58.imageshack.us/img58/4893/designqj3.png] Here is what the email looks like: [http://img58.imageshack.us/img58/9297/emailmy8.png] Does anyone know why this is happening?

    Read the article

  • Tomcat reporting 404 error on all of newly deployed WAR files?

    - by dacracot
    I deployed a WAR file into $TOMCAT_HOME/webapps by copying the file into the directory, just like I've done a thousand times before. Tomcat detects the WAR and inflates it. I can traverse the directory tree on my server at the command line (it's Fedora). But when I address the webapp within my client machine's browser, I get nothing but 404 errors. This has happened to the last two deployments of completely separate WARs. The first was a replacement of an existing WAR. I first deleted the WAR and its inflated directory, and then copied in the WAR which inflated... 404. I deleted everything again, put back the previously working WAR from backup. It inflated and worked. The second was a completely new, never before deployed WAR... nothing but 404. Other WARs are working, but now I'm afraid to change anything until I know what is going on. Any clues? Edit: From my comment you can see that the logs included "SEVERE: Error listenerStart" after the WAR was deployed by Tomcat. There were no stack traces or other errors reported.

    Read the article

  • How do I implement a score database in Android?

    - by Michael Seun Araromi
    I am making a 2D game for Android using OpenGL-ES technology. It is a space shooting game where the player shoots enemy ships. I want to keep track of the score for the number of enemy ships destroyed and a record of a local highscore. The score should be incremented whenever an enemy is destroyed. I also want a way of displaying both the current score and the highscore on the game screen. I am not familiar with databases at all, and I would appreciate a clear answer or a link to a good tutorial for my cause. Thanks.
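    As a starting point, a minimal sketch of a score table in SQLite (the database engine built into Android); the table and column names here are illustrative, not from the question:

        CREATE TABLE IF NOT EXISTS scores (
            id        INTEGER PRIMARY KEY AUTOINCREMENT,
            score     INTEGER NOT NULL,
            played_at TEXT DEFAULT CURRENT_TIMESTAMP
        );

        INSERT INTO scores (score) VALUES (1250);      -- record a finished game
        SELECT MAX(score) AS high_score FROM scores;   -- the local highscore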

    Read the article

  • Disaster Recovery Discovery

    - by Rodney Landrum
    Last weekend I joined several of my IT staff on a mission to perform a DR test in our remote CoLo center in a large South East city of the US. Can I be more obtuse? The goal was simple for me as the sole DBA in a throng of Windows, Storage, Network and SAN admins: restore the databases and make them work. There were 4 applications backed by 7 SQL Server databases on 4 different SQL Server instances. We would maintain the original server names, but beyond that it was fair game. We had time to prepare, so I was able to script out or otherwise automate the recovery process. I used sp_help_revlogin for three of the servers, a bit of a cheat actually because restoring the Master database on the target DR servers was the specified course of action according to the DR procedures (the caveat “IF REQUIRED” left it open to interpretation). I really wanted to avoid the step of restoring Master for a number of reasons, but mainly because I did not want to deal with issues starting SQL Services afterward. Having to account for the location of TempDB and the version conflicts of the resource DBs were just two of the battles I chose not to fight, not to mention other system database location problems that might arise and prevent SQL from starting. I was going to have to restore all of the user databases anyway, so I would not really gain any benefit, outside of logins, from taking the time to restore the source Master database over the newly installed one on the fresh server. What I wanted was the ability to restore the Master database as a user database, call it Master_Mine, from a backup on the source system, and then use that restored database to script the SQL logins and passwords on the DR systems. While I did not attempt this on the trip, the thought stuck in my mind, and this past week I succeeded at scripting user accounts and passwords using only a restored copy of the Master database. Granted, there were several challenges to overcome. Also, as is usual for any work like this, the usual disclaimers apply: this is not something that I would imagine Microsoft would condone or support, and this was really only an experiment for me to learn if it was even possible. While I have tested the process with success, I do not know that I would use this technique in a documented procedure, because future updates for SQL Server will render this technique non-functional. I thought at first, incorrectly of course, that I could use sp_help_revlogin on a restored copy of the Master database I named Master_Mine. Since sp_help_revlogin uses system schema objects, sys.syslogins and sys.server_principals, this was not going to work, because all results would come from the main Master database. To test this I added a SQL login via SSMS, backed up Master, restored it as Master_Mine, and then deleted the login. The test account I created should presumably still be in the Master_Mine database; I should be able to get to it and script out its creation with its password hash, so that I would not need to know the password, and any applications that stored that password would not have to be altered in the DR scenario. They would just work as expected. Once I realized that would not work I began looking deeper. Knowing that sys.syslogins and sys.server_principals are system views, their underlying code should be available with sp_helptext, right? It was. And this led me to discover the two tables sys.sysxlgns and sys.sysprivs, where the data I needed was stored.
These tables existed in both the real Master and the restored copy, Master_Mine. I used this information to tweak the sp_help_revlogin stored procedure to use these tables instead to create the logins cursor used in sp_help_revlogin. For the password hash, sp_help_revlogin uses the function LoginProperty(), which takes a user name and the option ‘passwordhash’ to return the hash for the user. Unfortunately, it requires the login to exist in the Master database. This would not work. So another slight modification I had to make was to pull the password hash itself (pwdhash from sys.sysxlgns) into the logins cursor and comment out the section of sp_help_revlogin that uses LoginProperty. Instead, I pass the pwdhash value as the variable @PWD_varbinary to the sp_hexadecimal stored procedure, which is also created by and used within the code provided by Microsoft in the link above for sp_help_revlogin. The final challenge: sys.sysxlgns and sys.sysprivs are visible only within a Dedicated Administrator Connection (DAC) query window in SSMS or within SQLCMD. To open a DAC connection you have to be logged in on the SQL Server itself, via RDP in my case, and you preface the server name in the query connection with ADMIN:, so that the server connection looks like ADMIN:ServerName. From there you can create the modified stored procedure in the restored copy of a Master database from a source system under whatever name you like, and then run the modified stored procedure. I named my new stored procedure usp_help_revlogin_MyMaster. Upon execution I was happy to see the logins and password hashes that I needed to apply from the source Master database, without having to restore over the new Master system database and without the need to access the original server (assuming it was down due to whatever disaster put it in that state). You will note that I am not providing full code samples here of the modifications. I will say that it was a slight bit of work, and anyone who needed to do this for whatever reason could fairly easily roll their own solution with the information provided herein. My goal, as I said, was to prove that this could be done and provide another option if required to ease the burden of getting SQL Servers up and available in an emergency situation where alternatives may be more challenging or otherwise unavailable.
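    The author deliberately withholds his full modified procedure, so what follows is only a hedged sketch of the core lookup it is built around, using the names from the article (Master_Mine); sys.sysxlgns is an undocumented base table, visible only from a DAC connection, and may change in any SQL Server update:

        -- Run from a DAC connection (ADMIN:ServerName):
        SELECT name, pwdhash          -- pwdhash feeds sp_hexadecimal, per the article
        FROM Master_Mine.sys.sysxlgns
        WHERE pwdhash IS NOT NULL;    -- SQL logins only; Windows logins carry no hash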

    Read the article

  • The Fantastic New WebLogic on Oracle Database Appliance 2.9 Release is Here!

    - by JuergenKress
    Last week was a big day in virtualised ODA-land as it saw the launch of WebLogic on ODA 2.9. Admittedly it doesn't sound like a very exciting release, but it is one that we at O-box have been looking forward to for quite some time. Let me explain why, then we'll look into the details... The ODA X4-2 has 48 Intel Xeon cores. That is a lot of compute power. Whilst the largest O-box SOA Appliance single environment configuration can in theory use all those cores (currently with 40 vCPU of SOA!), the vast majority of O-box users will want smaller configurations. Prior to 2.9 the Oracle WebLogic implementation only supported one domain per ODA, so the conundrum O-box development faced last year was either: offer customers only one SOA environment on their O-box for now (but have the benefit of a standard, easily supportable WebLogic installation), or build our own WebLogic/OTD OVM templates from scratch. One of our driving goals with O-box is to give the best possible experience and make the appliance as supportable as possible. Therefore we took the gamble that we would stick with Oracle's one-domain WebLogic configuration initially, and just hope that Oracle would deliver multi-domain support for us in a timely manner (note: this is probably not a strategy that business textbooks would recommend!). Anyway, we've been working closely with Oracle Product Management for a few months now and I'm delighted to see 2.9 as the fruits of their labour. This also neatly ties in with several recent requests for O-box to include OSB as well as SOA/BPEL (which we have always wanted to have in separate domains). The diagram below is the neatest way to summarise what the new 2.9 release will allow us to deliver, i.e. previously only one 3D box was possible: Read the complete article here. WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Keeping a database structure up to date in a project where code is on Subversion?

    - by Bruno De Barros
    I have been working with Subversion for a while now, and it's been incredible for the management of my projects, and even to help managing the deployment to several different servers, but there is just the one thing that still annoys me. Whenever I make any changes to the database structure, I need to update every server manually, I have to keep track of any changes I made, and because some of my servers run branches of the project (modifications that are still being worked on, or were made for different purposes), it's a bit awkward. Until now, I've been using a "database.sql" file, which is a dump of the database structure for a specific revision. But it just seems like such a bad way to manage this. And I was wondering, how does everyone else manage their MySQL databases when they're working on a project and using Subversion?
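    One common pattern, sketched here with illustrative names rather than anything from the question: keep numbered migration scripts under version control next to the code, plus a one-row version table in each database, so every server (including branches) can tell which scripts it still needs to run:

        -- migrations/0002_add_last_login.sql (file name and schema are hypothetical)
        ALTER TABLE users ADD COLUMN last_login DATETIME NULL;
        UPDATE schema_version SET version = 2;   -- one-row bookkeeping table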

    Read the article

  • Preview result of update/insert query without committing changes to database in MySQL?

    - by Camsoft
    I am writing a script to import CSV files into existing tables within my database. I decided to do the insert/update operations myself using PHP and INSERT/UPDATE statements, and not to use MySQL's LOAD DATA INFILE command; I have good reasons for this. What I would like to do is emulate the insert/update operations and display the results to the user, then give them the option of confirming that this is OK before committing the changes to the database. I'm using the InnoDB database engine with support for transactions. Not sure if this helps, but I was thinking along the lines of: insert/update, query the data, display it to the user, then either commit or roll back the transaction? Any advice would be appreciated.
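    That outline maps directly onto InnoDB transactions, with one caveat worth stating: the preview query must run on the same connection that holds the open transaction, or it will not see the uncommitted rows. A minimal sketch with illustrative table names:

        START TRANSACTION;
        INSERT INTO products (sku, price) VALUES ('A-100', 9.99);
        SELECT * FROM products WHERE sku = 'A-100';  -- visible only inside this transaction
        -- show the result to the user, then either:
        COMMIT;      -- keep the changes
        -- ROLLBACK; -- or discard them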

    Read the article

  • How can I split a PHP form into two sections, before submission to a database?

    - by Daniel Smith
    Hi guys, I have set up a PHP form for a competition, for users to enter and for all information to be stored in a database. I used the following Nettuts+ tutorial to do so: http://tr.im/SwAd. I've got the form submitting to the database as required, but with so many additional questions being asked, I would like to split the form into two separate sections. Obviously the first page would say "continue to the next step", with the second step allowing the form to be submitted to the database. The content that the user sees should be split, but it should all be part of the same form: step 1, then step 2, before submission. Would anyone know of or recommend any methods to do this? I'm a beginner, so please be nice. :) Cheers, Daniel

    Read the article

  • How do we know if a query is cached or retrieved from the database?

    - by Hadi
    For example:

        class Product
          has_many :sales_orders

          def total_items_deliverable
            self.sales_orders.each { |so| } # sum the total, give back the value
          end
        end

        class SalesOrder
          def self.deliverable
            # return array of sales_orders that are deliverable to customer
          end
        end

    1. SalesOrder.deliverable # give all sales_orders that are deliverable to customer
    2. pa = Product.find(1)
    3. pa.sales_orders.deliverable # give all sales_orders whose product_id is 1 and deliverable to customer
    4. pa.total_so_deliverable

    The very point that I'm going to ask about is: how many times is SalesOrder.deliverable actually computed? From points 1, 3, and 4, it is computed 3 times, which means 3 accesses to the database; so having total_so_deliverable promotes a fat model but means more database access. Alternatively (in the view) I could iterate while displaying the content, so I end up accessing the database only 2 times instead of 3. Any win-win solution / best practice for this kind of problem?

    Read the article

  • How do I write unit tests to make sure a stored procedure is deleting rows from the database?

    - by aspdotnetuser
    I'm new to unit testing and I need some help with the following. I have created a small project to help me learn how to write unit tests. The functionality for one of the forms in my application deletes a user from the User table (and the corresponding rows in mapping tables). Currently, the unit test I have created for this sets up the required objects and then calls the business rules method (passing in the user id), which calls the data access method to execute the stored procedure that deletes the rows in the tables. Is this the correct way to test whether something is being deleted successfully? Should the unit test / setup method first insert some test data which the unit test then deletes?
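    One common shape for such a test, sketched with hypothetical object names (Users, DeleteUser): insert the test data inside a transaction, call the procedure, assert the rows are gone, and roll back so the database is left untouched (this assumes the procedure does not commit a transaction of its own):

        BEGIN TRANSACTION;
        INSERT INTO Users (UserId, UserName) VALUES (999, 'test-user');  -- arrange
        EXEC DeleteUser @UserId = 999;                                   -- act
        SELECT COUNT(*) FROM Users WHERE UserId = 999;                   -- assert: expect 0
        ROLLBACK TRANSACTION;                                            -- clean up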

    Read the article

  • How can I track the last location of a shipment efficiently using the latest date of reporting?

    - by hash
    I need to find the latest location of each cargo item in a consignment. We mostly do this by looking at the route selected for a consignment and then finding the latest (max) time entered against the nodes of this route. For example, if a route has 5 nodes and we have entered timings against the first 3 nodes, then the latest timing (max time) will tell us its location among the 3 nodes. I am really stuck on this query regarding performance issues: even on a few hundred rows, it takes more than 2 minutes. Please suggest how I can improve this query, or any alternative approach I should take. Note: ATA = Actual Time of Arrival and ATD = Actual Time of Departure.

        SELECT DISTINCT(c.id) as cid, c.ref as cons_ref, c.Name, c.CustRef
        FROM consignments c
        INNER JOIN routes r ON c.Route = r.ID
        INNER JOIN routes_nodes rn ON rn.Route = r.ID
        INNER JOIN cargo_timing ct ON c.ID = ct.ConsignmentID
        INNER JOIN (SELECT t.ConsignmentID, Max(t.firstata) as MaxDate
                    FROM cargo_timing t
                    GROUP BY t.ConsignmentID) as TMax
            ON TMax.MaxDate = ct.firstata AND TMax.ConsignmentID = c.ID
        INNER JOIN nodes an ON ct.routenodeid = an.ID
        INNER JOIN contract cor ON cor.ID = c.Contract
        WHERE c.Type = 'Road'
          AND (c.ATD = 0 AND c.ATA != 0)
          AND (cor.contract_reference in ('Generic','BP001','020-543-912'))
        ORDER BY c.ref ASC
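    Without seeing the schema it is hard to be definitive, but one hedged first step: the derived TMax table scans cargo_timing on every execution, so a composite index covering the grouping column and the MAX column often lets both the GROUP BY and the join back be answered from the index alone:

        CREATE INDEX ix_cargo_timing_consignment_ata
            ON cargo_timing (ConsignmentID, firstata);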

    Read the article

  • Is a .Net membership database portable, or are accounts somehow bound to the originating Web site or server?

    - by Deane
    I have an ASP.Net Web site using .Net Membership with a SQL Server provider, so the users and roles are stored in the SQL tables created by Aspnet_regsql.exe. Is this architecture totally self-contained and portable, or are users in it somehow bound to the specific Web site on which they created their account? Put another way, if we create a bunch of users in dev or UAT, then back up and restore this database to another server, accessed under another domain name, should it still work just fine? We're seeing some odd behavior when we move the database, like users losing group affiliation and such, and I'm curious how portable and environment-agnostic this database really is. I have a sneaking suspicion that something is bound to the machine key or the domain.

    Read the article

  • SQL SERVER – Backing Up and Recovering the Tail End of a Transaction Log – Notes from the Field #042

    - by Pinal Dave
    [Notes from Pinal]: The biggest challenge people face is not taking a backup; the biggest challenge is restoring a backup successfully. I have seen so many different examples where users have failed to restore their database because they made some mistake while taking the backup and were not aware of it. The tail log backup was such an issue in earlier versions of SQL Server, but in the latest version of SQL Server the Microsoft team has fixed the confusion with additional information on the backup and restore screen itself. Now that this additional information is there, a few more people are confused, as they have no clue what it means. Previously they did not see this as an issue, and now they are finding the tail log to be a new learning. Linchpin People are database coaches and wellness experts for a data driven world. In this 42nd episode of the Notes from the Field series, database expert Tim Radney (partner at Linchpin People) explains, in very simple words, Backing Up and Recovering the Tail End of a Transaction Log. Many times when restoring a database over an existing database, SQL Server will warn you about needing to make a tail end of the log backup. This might be your reminder that you have to choose to overwrite the database, or it could be your reminder that you are about to write over and lose any transactions since the last transaction log backup. You might be asking yourself, “What is the tail end of the transaction log?” The tail end of the transaction log is simply any committed transactions that have occurred since the last transaction log backup. This is a very crucial part of a recovery strategy if you are lucky enough to be able to capture this part of the log. Most organizations have chosen to accept some amount of data loss. You might be shaking your head at this statement; however, if your organization is taking transaction log backups every 15 minutes, then your potential risk of data loss is up to 15 minutes. Depending on the extent of the issue causing you to have to perform a restore, you may or may not have access to the transaction log (LDF) to be able to back up those vital transactions. For example, if the storage array or disk that holds your transaction log file becomes corrupt or damaged, then you wouldn’t be able to recover the tail end of the log. If you do have access to the physical log file then you can still back up the tail end of the log. In 2013 I presented a session at the PASS Summit called “The Ultimate Tail Log Backup and Restore” and have been invited back this year to present it again. During this session I demonstrate how you can back up the tail end of the log even after the data file becomes corrupt. In my demonstration I set my database offline and then delete the data file (MDF). The database can’t become more corrupt than that. I attempt to bring the database back online to change the state to RECOVERY PENDING and then back up the tail end of the log. I can do this by specifying WITH NO_TRUNCATE. Using NO_TRUNCATE is equivalent to specifying both COPY_ONLY and CONTINUE_AFTER_ERROR. As its name says, it does not try to truncate the log. This is a great demo; however, how could I achieve backing up the tail end of the log if the failure destroys my entire instance of SQL and all I had was the LDF file? During my demonstration I also show that I can attach the log file to a database on another instance and then back up the tail end of the log.
If I am performing proper backups, then my most recent full, differential and log files should be on a server other than the one that crashed. I am able to achieve this task by creating a new database with the same name as the failed database. I then set the database offline, delete my data file and overwrite the log with my good log file. I attempt to bring the database back online and then back up the log with NO_TRUNCATE, just like in the first example. I encourage each of you to view my blog post and watch the video demonstration on how to perform these tasks. I really hope that none of you ever have to perform this in production; however, it is a really good idea to know how to do this just in case. It really isn’t a matter of “IF” you will have to perform a restore of a production system, but more of a “WHEN”. Being able to recover the tail end of the log in these severe cases could be the difference between having to notify all your business customers of data loss or not. If you want me to take a look at your server and its settings, or if your server is facing any issue, we can Fix Your SQL Server. Note: Tim has also written an excellent book on SQL Backup and Recovery, a must-have for everyone. Reference: Pinal Dave (http://blog.sqlauthority.com)
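    For reference, a minimal sketch of the tail-log backup Tim describes (database name and path are illustrative):

        BACKUP LOG [FailedDb]
            TO DISK = N'D:\Backups\FailedDb_tail.trn'
            WITH NO_TRUNCATE;  -- per the article: COPY_ONLY plus CONTINUE_AFTER_ERROR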

    Read the article

  • How do I apply data mining (association rules) to a huge database?

    - by stckvrflw
    Hello. What I want to do is apply the association method of data mining to my SQL Server 2000 database. An association rule is something like "finding the most frequent items that appear together in a database." For those who don't know, or who want a reminder of what the association method is like, take a look at this presentation about association rules in data mining: www.authorstream.com/Presentation/a.besimi-233030-data-mining-intro-seeu-education-ppt-powerpoint. The 17th slide gives a nice example of applying an association rule to a database. So can you help me with how I should write my SQL code (if SQL alone will be sufficient, of course)? Thanks.
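    The simplest association step can be done in plain SQL: count how often two items appear in the same transaction. A sketch with hypothetical table and column names:

        SELECT a.item AS item1, b.item AS item2, COUNT(*) AS together
        FROM order_items a
        JOIN order_items b
          ON a.order_id = b.order_id
         AND a.item < b.item              -- count each unordered pair once
        GROUP BY a.item, b.item
        HAVING COUNT(*) >= 10             -- minimum support threshold
        ORDER BY together DESC;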

    Read the article

  • How do I model my database when using Entity Framework 4?

    - by Junior Ewing
    Trying to wrap my head around the best approach to modelling a database when we are using Entity Framework 4 as the ORM layer. We are going to use ASP.NET MVC 2 for the application. Is it worth trying to model using the class diagram modeller that comes with Visual Studio 2010, where you graphically configure your models into the EDMX file and then generate out the database structure? I have run into a bunch of non-trivial issues, and for complex many-to-many mappings or multi-primary-key entities the answer is not that obvious, even after poking around a while with the tools. I figure it's easy at this point to give up and start modelling the DB using real, working DB modelling tools, and then try to generate the EDMX from the database, rather than trying to do the model-first approach.

    Read the article

  • What is the proper location for a sqlite3 database file?

    - by Elliot Chen
    Hi, everyone: I'm using a sqlite3 database to store my app's data. Instead of building the database within the program, I introduced an existing db file, 'abc.sqlite', into my project and put it under my 'Resources' folder. Since this db file is inside the bundle, at my init function I used the following statements to read it:

        NSString *path = [[NSBundle mainBundle] pathForResource:@"abc" ofType:@"sqlite"];
        if (sqlite3_open([path UTF8String], &database) != SQLITE_OK) ...

    It's OK in that this db can be opened and data can be retrieved from it. BUT, someone told me that it's better to copy this db file into a user folder such as 'Documents'. So, my question is: is it OK to use this db from the main bundle directly, or should I copy it to the user folder and then use that copy? Which is better? Thank you very much!

    Read the article
