Search Results

Search found 31356 results on 1255 pages for 'database backups'.


  • How to Identify and Backup the Latest SQL Server Database in a Series

    I have to support a third-party application that periodically creates a new database on the fly. This obviously causes issues with our backup mechanisms. The databases follow a particular naming pattern, so I can identify the set of databases; however, I need to make sure I'm always backing up the newest one. Read this tip to ensure you are backing up the latest database in a series.
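
    A minimal T-SQL sketch of one way to do this (not necessarily the tip's exact method), assuming the databases share a name prefix, here the hypothetical 'AppDb_', and that the newest database is the most recently created:

        DECLARE @dbname sysname, @sql nvarchar(max);

        -- Pick the most recently created database matching the naming pattern
        SELECT TOP (1) @dbname = name
        FROM sys.databases
        WHERE name LIKE N'AppDb[_]%'
        ORDER BY create_date DESC;

        -- Back it up; QUOTENAME guards against odd characters in the name
        SET @sql = N'BACKUP DATABASE ' + QUOTENAME(@dbname)
                 + N' TO DISK = N''D:\Backups\' + @dbname + N'.bak'' WITH INIT;';

        EXEC sys.sp_executesql @sql;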

    Read the article

  • Game Database Connectivity Java

    - by The Kraken
    I'm developing a simple multi-player puzzle game in Java. Both players should be able to view the same game board on his own computer. Then, when one player makes an action in the game (ex. drags an object onto a coordinate space), the game's view should update automatically on the other computer's game screen. I'd like all this to happen over the internet, not requiring both computers to be on the same LAN connection. If I need to use SQL/PHP to accomplish this, I'm unsure how to design the database to accomplish something as simple as the following: Player A drags element onscreen Game sends coordinates of element to database/server Player B's computer detects a change to an item in the database Player B's computer grabs the coordinates of Player A's item Player B's machine draws onscreen elements at the received coordinates Could somebody point me in the right direction?

    Read the article

  • Real tortoises keep it slow and steady. How about the backups?

    - by Maria Zakourdaev
      … Four tortoises were playing in the backyard when they decided they needed hibiscus flower snacks. They pooled their money and sent the smallest tortoise out to fetch them. Two days passed and there was no sign of the tortoise. "You know, she is taking a lot of time," said one of the tortoises. A little voice from just outside the fence said, "If you are going to talk that way about me, I won't go."

    Is it too much to ask of a quite expensive 3rd-party backup tool that it be way faster than the SQL Server native backup? Or at least that it save a respectable amount of storage by producing a really smaller backup file? By saying "really smaller", I mean at least getting a file half the size.

    After Googling around in an attempt to understand what other SQL people are using for database backups, I see that most are using one of three tools, the main players in the SQL backup area:
    - LiteSpeed by Quest
    - SQL Backup by Red Gate
    - SQL Safe by Idera

    The feedback about those tools is truly emotional and happy. However, while reading the forums and blogs I wondered: is it possible that many are simply accustomed to using the above tools since the SQL 2000 and 2005 days? That would be easy to understand: a 300GB database backup, for instance, using a regular SQL 2005 backup statement would have run for about 3 hours and produced a ~150GB file (depending on the content, of course). Then you take a 3rd-party tool which performs the same backup in 30 minutes, resulting in a 30GB file, and it leaves you speechless; you run to management persuading them to buy it, because it is definitely worth the price. In addition to the increased speed and disk space savings, you also get backup file encryption and virtual restore - features that are still missing from SQL Server. But in case you, like me, don't need these additional features and only want a tool that performs a full backup MUCH faster AND produces a far smaller backup file (like the gain you observed back in the SQL 2005 days), you will be quite disappointed. The SQL Server backup compression feature has totally changed the market picture.

    Medium size database

    Take a look at the table below and check how my SQL Server 2008 R2 compares to the other tools when backing up a 300GB database. It appears that, when talking about backup speed, SQL 2008 R2 compresses and performs the backup in a similar overall time to all three other tools; the 3rd-party tools' maximum compression level takes twice as long. The backup file size gain is not that impressive except at the highest compression levels, but the price you pay for those is a very high CPU load and a much longer run time. Only SQL Safe by Idera was quite fast at its maximum compression level, but for most of the run time it used 95% CPU on the server. Note that I used two types of destination storage, SATA (11 disks) and FC (53 disks); obviously, on the faster storage the backup was ready in half the time.

    Looking at the above results, should we spend the money and bother with another layer of complexity and a software middle-man for medium-sized databases? I'm definitely not going to.

    Very large database

    As the next phase of this benchmark, I moved to a 6 terabyte database, which was actually my main backup target. Note how using multiple files enables the SQL Server backup operation to use parallel I/O and remarkably increases its speed, especially when the backup device is heavily striped. SQL Server supports a maximum of 64 backup devices for a single backup operation, but the most speed is gained when using one file per CPU core: in the case above, 8 files for a server with 2 quad-core CPUs. The impact of additional files beyond that is minimal. However, SQL Safe doesn't show any speed improvement between 4 files and 8 files.

    Of course, with such huge databases, every half percent of compression translates into noticeable numbers. Saving almost 470GB of space may turn the backup tool into quite a valuable purchase. Still, the backup speed and the high CPU load are variables that should be taken into consideration. As for us, backup speed is more critical than storage, and we cannot allow a production server to sustain 95% CPU for such a long time.

    Bottom line, 3rd-party backup tool developers: we are waiting for a breakthrough release. There are a few unanswered questions, like the restore speed comparison between the different tools and the impact of multiple backup files on the restore operation. Stay tuned for the next benchmarks.

    Benchmark server: SQL Server 2008 R2 SP1, 2 quad-core CPUs. Database location: NetApp FC 15K aggregate, 53 disks.

    Backup statements: No matter how good a tool's UI is, we need to run the backup tasks from inside SQL Server Agent to make sure they are covered by our monitoring systems. I used the extended stored procedures (command-line execution is also an option; I haven't noticed any impact on backup performance).

    SQL backup:

        backup database <DBNAME>
        to disk = '\\<networkpath>\par1.bak',
           disk = '\\<networkpath>\par2.bak',
           disk = '\\<networkpath>\par3.bak'
        with format, compression

    LiteSpeed:

        EXECUTE master.dbo.xp_backup_database
            @database = N'<DBName>',
            @backupname = N'<DBName> full backup',
            @desc = N'Test',
            @compressionlevel = 8,
            @filename = N'\\<networkpath>\par1.bak',
            @filename = N'\\<networkpath>\par2.bak',
            @filename = N'\\<networkpath>\par3.bak',
            @init = 1

    SQL Backup:

        EXECUTE master.dbo.sqlbackup
            '-SQL "BACKUP DATABASE <DBNAME>
             TO DISK = ''\\<networkpath>\par1.sqb'',
                DISK = ''\\<networkpath>\par2.sqb'',
                DISK = ''\\<networkpath>\par3.sqb''
             WITH DISKRETRYINTERVAL = 30, DISKRETRYCOUNT = 10, COMPRESSION = 4, INIT"'

    SQL Safe:

        EXECUTE master.dbo.xp_ss_backup
            @database = 'UCMSDB',
            @filename = '\\<networkpath>\par1.bak',
            @backuptype = 'Full',
            @compressionlevel = 4,
            @backupfile = '\\<networkpath>\par2.bak',
            @backupfile = '\\<networkpath>\par3.bak'

    If you still insist on using 3rd-party tools for the backups in your production environment at maximum compression level, you will definitely need to consider limiting CPU usage, which will increase the backup operation time even more:
    - Red Gate: use the THREADPRIORITY option (values 0-6)
    - LiteSpeed: use @throttle (a percentage, like 70%)
    - SQL Safe: the only thing I have found was the @Threads option

    Yours, Maria

    Read the article

  • Installing SharePoint 2010 in one machine with built in database

    - by sreejukg
    It is very easy to deploy SharePoint 2010 in a single server using the built-in database. Normally one need to choose such installation for evaluation purposes. When installing with default settings, setup installs Microsoft SQL server 2008 express database along with SharePoint. After installing SharePoint, you need to run SharePoint products and technology configuration wizard which will install central admin website and creates the configuration database and content database for SharePoint sites. Limitations 1. You can not perform this installation on a domain controller 2. The maximum size for express edition database is 4 GB SharePoint 2010 only supports 64 bit operating systems. The installation steps are for windows server r2 64 bit enterprise edition. Installation steps The first screen for the installation is as follows As a first step you need to install the s/w prerequisites. Click on the corresponding link Click next, here you have to agree on the license terms. Select the checkbox and then click next. The installation will starts. The progress will be updated in the screen. This may take some time as during this process, there are some components needs to be downloaded from internet. Make sure you are connected to the internet, then only the installation will become a success. If any error occurs, it will display the error, you need to configure in order to continue. If everything ok you will receive the following success page. Click finish to exit the installation window. Now from the first screen, select Install SharePoint server. This will install SharePoint and SQL server 2008 express edition. First you need to enter the product key for SharePoint. Enter the product key and clicks continue. Now you need to accept the license agreement. Select the checkbox and click on continue. Select the installation type you want.   Now click on the standalone button. In production scenario, you need to select the server farm installation. This article only cover the first option, installing server farm is not in the scope of this article. Once you click on the standalone, the installation starts and you can view the progress as below. If any error occurred during installation, you will get the details and link to the log file. Refer log file and fix the corresponding issue and then start the installation again. If installation completes without any error, you will see the below screen. Make sure you selected the check box “Run the SharePoint products Configuration Wizard now” and click close. The SharePoint products configuration wizard starts. Click next; you will get the following warning Click yes and the configuration steps starts. You can view the progress for each step. Once completed the below screen appears to the user. Click finish to complete the installation. Now SharePoint installation is completed. You can navigate to SharePoint central administration website from the administrative tools and start building your portal. Good luck

    Read the article

  • Database Mirroring on SQL Server Express Edition

    - by Most Valuable Yak (Rob Volk)
    Like most SQL Server users I'm rather frustrated by Microsoft's insistence on making the really cool features only available in Enterprise Edition.  And it really doesn't help that they changed the licensing for SQL 2012 to be core-based, so now it's like 4 times as expensive!  It almost makes you want to go with Oracle.  That, and a desire to have Larry Ellison do things to your orifices. And since they've introduced Availability Groups, and marked database mirroring as deprecated, you'd think they'd make make mirroring available in all editions.  Alas…they don't…officially anyway.  Thanks to my constant poking around in places I'm not "supposed" to, I've discovered the low-level code that implements database mirroring, and found that it's available in all editions! It turns out that the query processor in all SQL Server editions prepends a simple check before every edition-specific DDL statement: IF CAST(SERVERPROPERTY('Edition') as nvarchar(max)) NOT LIKE '%e%e%e% Edition%' print 'Lame' else print 'Cool' If that statement returns true, it fails. (the print statements are just placeholders)  Go ahead and test it on Standard, Workgroup, and Express editions compared to an Enterprise or Developer edition instance (which support everything). Once again thanks to Argenis Fernandez (b | t) and his awesome sessions on using Sysinternals, I was able to watch the exact process SQL Server performs when setting up a mirror.  Surprisingly, it's not actually implemented in SQL Server!  Some of it is, but that's something of a smokescreen, the real meat of it is simple filesystem primitives. The NTFS filesystem supports links, both hard links and symbolic, so that you can create two entries for the same file in different directories and/or different names.  You can create them using the MKLINK command in a command prompt: mklink /D D:\SkyDrive\Data D:\Data mklink /D D:\SkyDrive\Log D:\Log This creates a symbolic link from my data and log folders to my Skydrive folder.  Any file saved in either location will instantly appear in the other.  And since my Skydrive will be automatically synchronized with the cloud, any changes I make will be copied instantly (depending on my internet bandwidth of course). So what does this have to do with database mirroring?  Well, it seems that the mirroring endpoint that you have to create between mirror and principal servers is really nothing more than a Skydrive link.  Although it doesn't actually use Skydrive, it performs the same function.  So in effect, the following statement: ALTER DATABASE Mir SET PARTNER='TCP://MyOtherServer.domain.com:5022' Is turned into: mklink /D "D:\Data" "\\MyOtherServer.domain.com\5022$" The 5022$ "port" is actually a hidden system directory on the principal and mirror servers. I haven't quite figured out how the log files are included in this, or why you have to SET PARTNER on both principal and mirror servers, except maybe that mklink has to do something special when linking across servers.  I couldn't get the above statement to work correctly, but found that doing mklink to a local Skydrive folder gave me similar functionality. 
To wrap this up, all you have to do is the following: Install Skydrive on both SQL Servers (principal and mirror) and set the local Skydrive folder (D:\SkyDrive in these examples) On the principal server, run mklink /D on the data and log folders to point to SkyDrive: mklink /D D:\SkyDrive\Data D:\Data On the mirror server, run the complementary linking: mklink /D D:\Data D:\SkyDrive\Data Create your database and make sure the files map to the principal data and log folders (D:\Data and D:\Log) Viola! Your databases are kept in sync on multiple servers! One wrinkle you will encounter is that the mirror server will show the data and log files, but you won't be able to attach them to the mirror SQL instance while they are attached to the principal. I think this is a bug in the Skydrive, but as it turns out that's fine: you can't access a mirror while it's hosted on the principal either.  So you don't quite get automatic failover, but you can attach the files to the mirror if the principal goes offline.  It's also not exactly synchronous, but it's better than nothing, and easier than either replication or log shipping with a lot less latency. I will end this with the obvious "not supported by Microsoft" and "Don't do this in production without an updated resume" spiel that you should by now assume with every one of my blog posts, especially considering the date.

    Read the article

  • Big Data – ClustrixDB – Extreme Scale SQL Database with Real-time Analytics, Releases Software Download – NewSQL

    - by Pinal Dave
    There are so many things to learn and there is so little time we all have. As we have little time we need to be selective to learn whatever we learn. I believe I know quite a lot of things in SQL but I still do not know what is around SQL. I have started to learn about NewSQL recently. If you wonder what is NewSQL I encourage all of you to read my blog post about NewSQL over here Big Data – Buzz Words: What is NewSQL – Day 10 of 21. NewSQL databases are quickly becoming popular – providing the scale of NoSQL with the SQL features and transactions. As a part of learning NewSQL database, I have recently started to learn about ClustrixDB. ClustrixDB has been the most mature NewSQL database used by some of the largest internet sites in the world for over 3 years, with extensive SQL support. In addition to scale, it provides fast real-time analytics by bringing massively parallel processing (MPP), available only in warehousing databases, to the transactional database. The reason I am more intrigued about learning ClustrixDB is their recent announcement on Oct 31. ClustrixDB was only available as an appliance, but now with their software release on Oct 31, everyone can use it. It is now available as forever free for up to 12 cores with community support, and there is a 45 day trial for unlimited cluster sizes. With the forever free world, I am indeed interested in ClustrixDB now. I know that few of the leading eCommerce sites in the world uses them for their transactional database. Here are few of the details I have quickly noted for ClustrixDB. ClustrixDB allows user to: Scale by simply adding nodes to the cluster with a single command Run billions of transactions a day Run fast real-time analytics Achieve high-availability with recovery from node failure Manages itself Easily migrate from MySQL as it is nearly plug-and-play compatible, use MySQL drivers, tools and replication. While I was going through the documentation I realized that ClustrixDB also has extensive support for SQL features including complex queries involving joins on a dozen or more tables, aggregates, sorts, sub-queries. It also supports stored procedures, triggers, foreign keys, partitioned and temporary tables, and fully online schema changes. It is indeed a very matured product and SQL solution. Indeed Clusterix sound very promising solution, I decided to dig a bit deeper to understand who are current customers of the Clustrix as they exist in the industry for quite a few years. Their client list is indeed very interesting and here is my quick research about them. Twoo.com – Europe’s largest social discovery (dating) site runs 4.4 Billion Transactions a day with table sizes over a Terabyte, on a 168 core cluster. EngageBDR – Top 3 in the online advertising category uses ClustrixDB to serve 6.9 billion ads a day through real-time bidding platform. Their reports went from 4 hours to 15 seconds. NoMoreRack – Top 2 fastest growing e-commerce company in US used ClustrixDB for high availability and fast growth through Amazon cloud. MakeMyTrip – India’s leading travel site runs on ClustrixDB with two clusters running as multi-master in Chennai and Bangalore. Many enterprises such as AOL, CSC, Rakuten, Symantec use ClustrixDB when their applications need scale. I must accept that I am impressed with the information I have learned so far and now is the time to do some hand’s on experience with their product. I want to learn this technology so in future when it is about NewSQL, I know what I am talking about. 
Read more why Clustrix explains why you ClustrixDB might be the right database for you. Download ClustrixDB with me today and install it on your machine so in future when we discuss the technical aspects of it, we all are on the same page. The software can be downloaded here. Reference : Pinal Dave (http://blog.SQLAuthority.com)Filed under: Big Data, MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Clustrix

    Read the article

  • I want to access a MySQL database table on given conditions in a drop-down menu [on hold]

    - by user3909877
    as the code below is accesing the database table directly but i want it to display the table content on giving conditions in drop down menu like when i select islamabad in one drop down menu and lahore in other as given in code and press search buttonn then it display the table flights.but it is displaying it directly <p class="h2">Quick Search</p> <div class="sb2_opts"> <p> </p> <form method="post" action=""> <p>Enter your source and destination.</p> <p> From:</p> <select name="from"> <option value="Islamabad">Islamabad</option> <option value="Lahore">Lahore</option> <option value="murree">Murree</option> <option value="Muzaffarabad">Muzaffarabad</option> </select> <p> To:</p> <select name="To"> <option value="Islamabad">Islamabad</option> <option value="Lahore">Lahore</option> <option value="murree">Murree</option> <option value="Muzaffarabad">Muzaffarabad</option> </select> <input type="submit" value="search" /> </form> </form> </table> <?php $from = isset($_POST['from'])?$_POST['from']:''; $to = isset($_POST['to'])?$_POST['to']:''; if( $from =='Islamabad'){ if($to == 'Lahore'){ $db_host = 'localhost'; $db_user = 'root'; $database = 'homedb'; $table = 'flights'; if (!mysql_connect($db_host, $db_user)) die("Can't connect to database"); if (!mysql_select_db($database)) die("Can't select database"); $result = mysql_query("SELECT * FROM {$table}"); if (!$result) { die("Query to show fields from table failed"); } $result = mysql_query("SELECT * FROM {$table}"); if (!$result) { die("Query to show fields from table failed"); } $fields_num = mysql_num_fields($result); echo "<h1>Table: {$table}</h1>"; echo "<table border='1'><tr>"; while($row = mysql_fetch_row($result)) { echo "<tr>"; // $row is array... foreach( .. ) puts every element // of $row to $cell variable foreach($row as $cell) echo "<td>$cell</td>"; echo "</tr>\n"; } } } mysqli_close($con); ?>

    Read the article

  • MongoDB vs. Cassandra

    - by ming yeow
    I am evaluating what might be the best migration option. Currently, i am on a sharded mysql (horizontal partition), with most of my data stored in json blobs. I do not have any complex SQL queries( already migrated away after since I partitioned my db) Right now, it seems like both Mongodb and Cassandra would be likely options. My situation lots of reads in every query, less regular writes not worried about "massive" scalability more concerned about simple setup, maintenance and code minimize hardware/server cost

    Read the article

  • Postgres user drop

    - by Grasper
    I am trying to drop a user:

        drop user testUser;

    I want to force this to work in a simple manner, without a million calls. How can I do this easily? I get this output:

        ERROR:  role "testUser" cannot be dropped because some objects depend on it
        DETAIL:  access to table main.tap_db_version
        access to table main.user_instance
        access to table main.target_type
        access to table main.status_code
        access to table main.state_space_profile
        access to table main.service_subscription
        access to table main.service_instance
        access to table main.sa_ordnance_weapon_type
        access to table main.operation
        access to table main.mission_class
        access to table main.map_symbol
        access to table main.ada_weapon_type
        access to table main.active_process
        access to table main.acft_type_00_only
        access to table main.abp_create_params
        access to table main.exercise
        access to table main.decl
        access to table main.data_set
        access to table main.cancellation_notice
        access to table main.ato_family_tree
        access to table main.apportionment_cat_cd
        access to table main.abp
        access to table main.alert_settings
        access to table main.alert_log
        access to table main.airspace_usage_category
        access to schema main
        access to view testUser.top_priority
        access to view testUser.target_ssm_msn_count
        access to view testUser.target_air_msn_count
        access to view testUser.sortie_sum
        access to view testUser.ref_info
        access to view testUser.preview_rmk_count
        access to view testUser.preview_pgm_las_count
        access to view testUser.preview_pgm_desi_count
        access to view testUser.preview_objective_count
        access to view testUser.preview_gfriend_count
        access to view testUser.preview_escort_msn_req
        access to view testUser.preview_chaff_data
        access to view testUser.preview_airmove_seg
        access to view testUser.preview_aircraft_total
        access to view testUser.offload_total
        access to view testUser.objective_count
        access to view testUser.fuel_planned
        access to view testUser.ew_data
        access to view testUser.dual
        access to view testUser.current_base_inventory
        access to view testUser.cell_total
        access to view testUser.asgn_sortie_sum
        access to view testUser.appor_sorties_planned
        access to view testUser.airmove_seg
        access to view testUser.aircraft_total
        access to view testUser.abp
        access to table testUser.req_msn_task
        access to table testUser.req_task_source_req
        access to table testUser.req_ssm_msn
        access to table testUser.req_ssm_source
        access to table testUser.req_msn
        access to table testUser.req_msn_warnings
        access to table testUser.req_air_msn
        access to table testUser.req_src_header
        access to table testUser.req_msn_ids
        access to table testUser.req_msn_comment
        access to table testUser.req_c2_msn
        access to table testUser.req_c2_source
        access to table testUser.req_ada_msn
        access to table testUser.req_ada_vertex
        access to table testUser.weather_forecast
        access to table testUser.weather_coords
        access to table testUser.weather_area
        access to table testUser.weapon_option
        access to table testUser.wag_activity
        access to table testUser.unit_remark
        access to table testUser.unit_location_turn
        access to table testUser.unit_iff
        access to table testUser.unit_coordination
        access to table testUser.unit_code
        access to table testUser.trace_point
        access to table testUser.tasking_agency
        access to table testUser.task_unit
        access to table testUser.target_type
        access to table testUser.tap_db_version
        access to table testUser.status_code
        access to table testUser.state_space_threat
        access to table testUser.state_space_profile
        access to table testUser.state_space
        access to table testUser.ssm_mission
        access to table testUser.spins_section_id
        access to table testUser.spins_codes
        access to table testUser.spins
        access to table testUser.unit_location
        access to table testUser.ship_target_request
        access to table testUser.service_subscription
        access to table testUser.service_instance
        access to table testUser.sa_ordnance_weapon_type
        access to table testUser.runway
        access to table testUser.restricted_codes
        access to table testUser.response_entity
        access to table testUser.residual_mission
        access to table testUser.request_objective
        access to table testUser.request
        and 194 other objects (see server log for list)
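
    A minimal sketch of the usual fix (available since PostgreSQL 8.2), assuming testUser's objects can be handed to another role such as postgres; run it in each database that contains the dependent objects:

        -- Transfer ownership of everything testUser owns, then drop the rest
        REASSIGN OWNED BY "testUser" TO postgres;
        DROP OWNED BY "testUser";  -- also revokes testUser's remaining privileges
        DROP USER "testUser";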

    Read the article

  • UnboundLocalError: local variable 'rows' referenced before assignment

    - by patrick
    i'm trying to make a database connection by an other script. But the script didn't work propperly. and if I do a 'print' on the rows then I get the value 'null' But if I use a 'select * from incidents' query then i get the result from the table incidents. import database rows = database.database("INSERT INTO incidents VALUES(3 ,'test_title1', 'test', TO_DATE('25-07-2012', 'DD-MM-YYYY'), CURRENT_TIMESTAMP, 'sector', 50, 60)") #print database.database() print rows database.py script: import psycopg2 import sys import logfile def database(query): logfile.log(20, 'database.py', 'Executing...') con = None try: con = psycopg2.connect(database='incidents', user='ipfit5', password='tester') cur = con.cursor() #print query cur.execute(query) rows = cur.fetchall() con.commit() #test row does work #cur.execute("INSERT INTO incidents VALUES(3 ,'test_titel1', 'test', TO_DATE('25-07-2012', 'DD-MM-YYYY'), CURRENT_TIMESTAMP, 'sector', 50, 60)") except: logfile.log(40, 'database.py', 'Er is iets mis gegaan') logfile.log(40, 'database.py', str(sys.exc_info())) finally: if con: con.close() return rows

    Read the article

  • Is it possible for double-escaping to cause harm to the DB?

    - by waiwai933
    If I accidentally double escape a string, can the DB be harmed? For the purposes of this question, let's say I'm not using parametrized queries For example, let's say I get the following input: bob's bike And I escape that: bob\'s bike But my code is horrible, and escapes it again: bob\\\'s bike Now, if I insert that into a DB, the value in the DB will be bob\'s bike Which, while is not what I want, won't harm the DB. Is it possible for any input that's double escaped to do something malicious to the DB assuming that I take all other necessary security precautions?

    Read the article

  • ERD Design help needed

    - by Mobi
    Hello guyz, I am new to ERD and stuff.Earlier i was drawing an erd that issued me some problems. the name of two entities in focus is "Bus" and "Passenger".What shall be the relationship between them. I think it should be many to many since one passenger can travel in many buses and a bus can give ride to many passengers.But one of my friend insisted that its a one-to-many relationship(A bus can have many passengers but a passenger can travel in only one bus).Plz let me know what's right. Also , whats the relationship between a class,students. Any help is appreciated.

    Read the article

  • Will a database server perform better running on 2 CPUs with 16 cores or 4 CPUs with 8 cores?

    - by AlexOdin
    What I have: an online financial application (ASP.NET, C#) at peak we have 5K+ simultaneous users backend is running on Oracle 11g (active server + stand-by using Active Data Guard). At peak - 4K-5K database sessions Oracle is installed on Linux 5.8 (Oracle's unbreakable version) the database size: 7TB disk storage: NetApp (connected with 10GB network) I would like to replace old servers (IT will purchase HP blades BL685C). Servers will have 256GB of RAM. I need your help to figure out what to do with CPUs and cores. Options: 2 CPUs (2.3 GHz) with 16 cores each 4 CPUs (3.0 GHz) with 8 cores each Question: Which one should I pick? P.S. Next year, we will migrate from Oracle to SQL server. I hope, whatever option you recommend will work for both platforms

    Read the article

  • Best practice stock management when payment of customer failed using SQL Server and ASP.NET

    - by Martijn B
    Hi there, I am currently building a webshop for my own where I want to increment the product-stock when the user fails to complete payment within 10 minutes after the customer placed the order. I want to gather information from this thread to make a design decision. I am using SQL Server 2008 and ASP.NET 3.5. Should I use a SQL Server Job who intervals check the orders which are not payed yet or are there better solutions to do this. Thanks in advance! Martijn

    Read the article

  • What database is easy to maintain and manage in a cluster?

    - by Sanoj
    I'm looking for a database (DBMS) that is easy to scale out. I would like to have high availability so I need a multi-master cluster, where the data is replicated to two or more physical computers. I would also like to be able to start with one node (no replication), and then scale out to more nodes as needed without a reinstallation or downtime. I would like to have a DBMS that are easy to maintain and manage. It should be easy to add nodes, remove nodes, take live backup and monitor the use of resources. It doesn't have to be a relational database system, so a NoSQL is okey. And I would like to have a free version so I can test it in small scale and compare it with alternatives. What alternatives do I have?

    Read the article

  • Database server: Small quick RAM or large slow RAM?

    - by Josh Smeaton
    We are currently designing our new database servers, and have come up with a trade off I'm not entirely sure of how to answer. These are our options: 48GB 1333MHz, or 96GB 1066MHz. My thinking is that RAM should be plentiful for a Database Server (we have plenty and plenty of data, and some very large queries) rather than as quick as it could be. Apparently we can't get 16GB chips at 1333MHz, hence the choices above. So, should we get lots of slower RAM, or less faster RAM? Extra Info: Number of DIMM Slots Available: 6 Servers: Dell Blades CPU: 6 core (only single socket due to Oracle licensing).

    Read the article

  • recursive delete trigger and ON DELETE CASCADE constraints are not deleting everything

    - by bitbonk
    I have a very simple datamodel that represents a tree structure: The RootEntity is the root of such a tree, it can contain children of type ContainerEntity and of type AtomEntity. The type ContainerEntity again can contain children of type ContainerEntity and of type AtomEntity but can not contain children of type RootEntity. Children are referenced in a well known order. The DB model for this is below. My problem now is that when I delete a RootEntity I want all children to be deleted recursively. I have create foreign key with CASCADE DELETE and two delete triggers for this. But it is not deleting everything, it always leaves some items in the ContainerEntity, AtomEntity, ContainerEntity_Children and AtomEntity_Children tables. Seemling beginning with the recursionlevel of 3. CREATE TABLE RootEntity ( Id UNIQUEIDENTIFIER NOT NULL, Name VARCHAR(500) NOT NULL, CONSTRAINT PK_RootEntity PRIMARY KEY NONCLUSTERED (Id), ); CREATE TABLE ContainerEntity ( Id UNIQUEIDENTIFIER NOT NULL, Name VARCHAR(500) NOT NULL, CONSTRAINT PK_ContainerEntity PRIMARY KEY NONCLUSTERED (Id), ); CREATE TABLE AtomEntity ( Id UNIQUEIDENTIFIER NOT NULL, Name VARCHAR(500) NOT NULL, CONSTRAINT PK_AtomEntity PRIMARY KEY NONCLUSTERED (Id), ); CREATE TABLE RootEntity_Children ( ParentId UNIQUEIDENTIFIER NOT NULL, OrderIndex INT NOT NULL, ChildContainerEntityId UNIQUEIDENTIFIER NULL, ChildAtomEntityId UNIQUEIDENTIFIER NULL, ChildIsContainerEntity BIT NOT NULL, CONSTRAINT PK_RootEntity_Children PRIMARY KEY NONCLUSTERED (ParentId, OrderIndex), -- foreign key to parent RootEntity CONSTRAINT FK_RootEntiry_Children__RootEntity FOREIGN KEY (ParentId) REFERENCES RootEntity (Id) ON DELETE CASCADE, -- foreign key to referenced (child) ContainerEntity CONSTRAINT FK_RootEntiry_Children__ContainerEntity FOREIGN KEY (ChildContainerEntityId) REFERENCES ContainerEntity (Id) ON DELETE CASCADE, -- foreign key to referenced (child) AtomEntity CONSTRAINT FK_RootEntiry_Children__AtomEntity FOREIGN KEY (ChildAtomEntityId) REFERENCES AtomEntity (Id) ON DELETE CASCADE, ); CREATE TABLE ContainerEntity_Children ( ParentId UNIQUEIDENTIFIER NOT NULL, OrderIndex INT NOT NULL, ChildContainerEntityId UNIQUEIDENTIFIER NULL, ChildAtomEntityId UNIQUEIDENTIFIER NULL, ChildIsContainerEntity BIT NOT NULL, CONSTRAINT PK_ContainerEntity_Children PRIMARY KEY NONCLUSTERED (ParentId, OrderIndex), -- foreign key to parent ContainerEntity CONSTRAINT FK_ContainerEntity_Children__RootEntity FOREIGN KEY (ParentId) REFERENCES ContainerEntity (Id) ON DELETE CASCADE, -- foreign key to referenced (child) ContainerEntity CONSTRAINT FK_ContainerEntity_Children__ContainerEntity FOREIGN KEY (ChildContainerEntityId) REFERENCES ContainerEntity (Id) ON DELETE CASCADE, -- foreign key to referenced (child) AtomEntity CONSTRAINT FK_ContainerEntity_Children__AtomEntity FOREIGN KEY (ChildAtomEntityId) REFERENCES AtomEntity (Id) ON DELETE CASCADE, ); CREATE TRIGGER Delete_RootEntity_Children ON RootEntity_Children FOR DELETE AS DELETE FROM ContainerEntity WHERE Id IN (SELECT ChildContainerEntityId FROM deleted) DELETE FROM AtomEntity WHERE Id IN (SELECT ChildAtomEntityId FROM deleted) GO CREATE TRIGGER Delete_ContainerEntiy_Children ON ContainerEntity_Children FOR DELETE AS DELETE FROM ContainerEntity WHERE Id IN (SELECT ChildContainerEntityId FROM deleted) DELETE FROM AtomEntity WHERE Id IN (SELECT ChildAtomEntityId FROM deleted) GO

    Read the article

  • Visual Studio 2010 database project, is there a visual way?

    - by b0x0rz
    started a visual studio 2010 database project. however i am only able to write sql in a text mode, there is no functionality in making the table for example in a visual view as exists when you add a new database to app_data folder and the work on it there. is this the only way and there is no visual way of doing this in the visual studio 2010 database project? or am i missing some obvious way of getting to it? thank you also if there is a tutorial anywhere (video maybe!?) please link it. i only found importing a database from an existing script video using a wizard. would like new database from scratch without wizard.

    Read the article

  • structured vs. unstructured data in db

    - by Igor
    the question is one of design. i'm gathering a big chunk of performance data with lots of key-value pairs. pretty much everything in /proc/cpuinfo, /proc/meminfo/, /proc/loadavg, plus a bunch of other stuff, from several hundred hosts. right now, i just need to display the latest chunk of data in my UI. i will probably end up doing some analysis of the data gathered to figure out performance problems down the road, but this is a new application so i'm not sure what exactly i'm looking for performance-wise just yet. i could structure the data in the db -- have a column for each key i'm gathering. the table would end up being O(100) columns wide, it would be a pain to put into the db, i would have to add new columns if i start gathering a new stat. but it would be easy to sort/analyze the data just using SQL. or i could just dump my unstructured data blob into the table. maybe three columns -- host id, timestamp, and a serialized version of my array, probably using JSON in a TEXT field. which should I do? am i going to be sorry if i go with the unstructured approach? when doing analysis, should i just convert the fields i'm interested in and create a new, more structured table? what are the trade-offs i'm missing here?

    Read the article

  • Synonym for "Many-to-Many" relationship (relational databases)

    - by Byron
    What's a synonym for a "many-to-many" relationship? I've finished writing an object-relational mapper but I'm still stumped as to what to name the function that adds that relation. addParent() and addChild() seemed quite logical for the many-to-one/one-to-many and addSuperclass() for one-to-one inheritance, but addManyToMany() would sound quite unintuitive to an object-oriented programmer. addSibling() or addCousin() doesn't really make sense either. Any suggestions? And before you dismiss this as a non-programming question, please remember that consistent naming schemes and encapsulation are pretty integral to programming :)

    Read the article

  • Should this even be a has_many :through association?

    - by GoodGets
    A Post belongs_to a User, and a User has_many Posts. A Post also belongs_to a Topic, and a Topic has_many Posts.

        class User < ActiveRecord::Base
          has_many :posts
        end

        class Topic < ActiveRecord::Base
          has_many :posts
        end

        class Post < ActiveRecord::Base
          belongs_to :user
          belongs_to :topic
        end

    Well, that's pretty simple and very easy to set up, but when I display a Topic, I not only want all of the Posts for that Topic, but also the user_name and the user_photo of the User that made each Post. However, those attributes are stored in the User model and not tied to the Topic. So how would I go about setting that up? Maybe it can already be queried, since the Post model has two foreign keys, one for the User and one for the Topic? Or maybe this is some sort of "one-way" has_many :through association, where the Post would be the join model, and a Topic would has_many :users, :through => :posts. But the reverse of this is not true: a User does NOT has_many :topics. So would this even need to be a has_many :through association? I guess I'm just a little confused about what the controller would look like to call both the Post and the User of that Post for a given Topic.
    Edit: Seriously, thank you to all that weighed in. I chose tal's answer because I used his code for my controller; however, I could have just as easily chosen either j.'s or tim's instead. Thank you both as well. This was so damn simple to implement, and I think today marks the day that I'm beginning to fall in love with Rails.

    Read the article

  • GORM ID generation and belongsTo association ?

    - by fabien-barbier
    I have two domains:

        class CodeSetDetail {
            String id
            String codeSummaryId
            static hasMany = [codes:CodeSummary]
            static constraints = {
                id(unique:true, blank:false)
            }
            static mapping = {
                version false
                id column:'code_set_detail_id', generator: 'assigned'
            }
        }

    and:

        class CodeSummary {
            String id
            String codeClass
            String name
            String accession
            static belongsTo = [codeSetDetail:CodeSetDetail]
            static constraints = {
                id(unique:true, blank:false)
            }
            static mapping = {
                version false
                id column:'code_summary_id', generator: 'assigned'
            }
        }

    I get two tables with these columns:
    - code_set_detail: code_set_detail_id, code_summary_id
    - code_summary: code_summary_id, code_set_detail_id (should not exist), code_class, name, accession
    I would like to link the code_set_detail and code_summary tables by 'code_summary_id' (and not by 'code_set_detail_id'). Note: 'code_summary_id' is defined as a column in the code_set_detail table, and as the primary key of the code_summary table. To sum up, I would like to define 'code_summary_id' as the primary key of the code_summary table and map 'code_summary_id' in the code_set_detail table. How do I define a primary key in one table and also map that key to another table?

    Read the article
