Search Results

Search found 32851 results on 1315 pages for 'vsts database edition'.

Page 195/1315

  • VS 2010 Server Explorer Database Showing No Tables

    - by Andy
    I'm working on a .Net application that needs to read from an Oracle 10g database behind Siebel. In VS 2010 Server Explorer, I've created a connection using the OracleClient type connector with a reference to the Oracle TNS service name as the "server name." The "Test Connection" button shows that the connection is successful. However, in the Server Explorer, when I go to expand the Tables, no tables are shown. I know for a fact that there are 3000+ tables in the database (thanks Siebel). Anyone know what's happening here? I'd like to create an Entity Framework 4.0 Entity Data Model... Thanks for the help! Andy
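    A common cause (an assumption here; the question doesn't confirm it) is that Server Explorer lists only the tables owned by the connecting user, while the Siebel tables belong to a different schema. A quick check from any SQL tool is to ask Oracle's data dictionary what the connection can actually see:

        -- How many tables each schema exposes to this login
        SELECT owner, COUNT(*) AS table_count
          FROM all_tables
         GROUP BY owner
         ORDER BY owner;

        -- The Siebel tables themselves ('SIEBEL' is a guess at the owner)
        SELECT table_name
          FROM all_tables
         WHERE owner = 'SIEBEL'
         ORDER BY table_name;

    If the tables show up here but not in Server Explorer, the tool is likely filtering to the logged-in user's own schema.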

    Read the article

  • Wisdom of merging 100s of Oracle instances into one instance

    - by hoytster
    Our application runs on the web, is mostly an inquiry tool, and does some transactions. We host the Oracle database. The app has always had a different instance of Oracle for each customer. A customer is a company which pays us to provide our service to the company's employees, typically 10,000-25,000 employees per customer. We do a major release every few years, and migrating to that new release is challenging: we might have a team at the customer site for a couple of weeks, explaining new functionality and setting up the driving data to suit that customer. We're considering going multi-client, putting all our customers into a single shared Oracle 11g instance on a big honkin' Windows Server 2008 server -- in order to reduce costs. I'm wondering if that's advisable. There are some advantages to having separate instances for each customer. Tell me if these are bogus, please. In my rough guess, in decreasing importance:
    1. Our customers MyCorp and YourCo can be migrated separately when breaking changes are made to the schema. (With multi-client, we'd be migrating 300+ customers overnight!?!)
    2. MyCorp's data can be easily backed up and (!!!) restored, without affecting other customers.
    3. MyCorp's data is securely separated from their competitor YourCo's data, without depending on developers to get the code right and/or DBAs to get the configuration right.
    4. Performance is better because the database is smaller (5,000 vs 2,000,000 rows in ~50 tables).
    5. If MyCorp's offices are (mostly) in just one region, then MyCorp's instance can be geographically co-located there, so network lag doesn't hurt performance. We can provide better service to global clients, for the same reason.
    6. If MyCorp wants to take their database in-house, then we can easily export their instance to get MyCorp their data.
    7. Load-balancing is easier because instances can be placed on different servers (this is with a web farm).
    8. When a DEV or QA instance is needed, it's easier to clone the real instance and anonymize the data, because there's much less data.
    9. Because they're small enough, developers can have their own instance running locally, so they can work on code while waiting at the airport and while in-flight, without fighting VPN hassles.
    Q1: What are other advantages of separate instances?
    We are contemplating changing the database schema and merging all of our customers into one Oracle instance, running on one hefty server. Here are advantages of the multi-client instance approach, most important first (my WAG). Please snipe if these are bogus:
    1. Less work for the DBAs, since they only need to maintain one instance instead of hundreds. Less DBA work translates to cheaper, our main motive for this change.
    2. With just one instance, the DBAs can do a better job of optimizing performance. They'll have time to add appropriate indexes and review our SQL.
    3. It will be easier for developers to debug & enhance the application, because there is only one schema and one app (there might be dozens of schema versions if there are hundreds of instances, with a different version of the app for each version of the schema). This reduces costs too. The alternative is having to start every debug session with (1) what version is this customer running, and (2) let's struggle to recreate the corresponding development environment, code and database. (We need a Virtual Machine that includes the code AND database instance for each patch and release!)
    4. Licensing Oracle is cheaper because it's priced per server irrespective of heft (or something -- I don't know anything about the subject).
    5. The database becomes a viable persistent store for web session data, because there is just one instance.
    6. Some database operations are easier with one multi-client instance, like finding a participant when they're hazy about which customer they (or their spouse, maybe) works for: all the names are in one table. (See the sketch below for one way a shared schema handles this.)
    7. Reporting across customers is straightforward.
    Q2: What are other advantages of having multiple clients in one instance?
    Q3: Which approach do you think is better (why)? Instance per customer, or all customers in one instance?
    I'm concerned that having one multi-client instance makes migration near-impossible, and that's a deal killer... unless there is a compromise solution like having two multi-client instances, the old and the new. In that case, we would design cross-instance solutions for finding participants, reporting, etc., so customers could go from one multi-client instance to the next without anything breaking. THANKS SO MUCH for your collective advice! This issue is beyond me -- but not beyond the collective you. :) Hoytster
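    For what it's worth, the shared-schema pattern behind items like "all the names are in one table" is just a tenant key on every customer-owned table; here is a minimal Oracle sketch with made-up table and column names, not a recommendation either way:

        -- Every customer-owned row carries a customer_id, and indexes lead with
        -- it so one tenant's queries don't wade through another tenant's rows.
        CREATE TABLE customers (
            customer_id  NUMBER         PRIMARY KEY,
            name         VARCHAR2(100)  NOT NULL
        );

        CREATE TABLE participants (
            participant_id NUMBER        PRIMARY KEY,
            customer_id    NUMBER        NOT NULL REFERENCES customers (customer_id),
            last_name      VARCHAR2(100) NOT NULL,
            first_name     VARCHAR2(100) NOT NULL
        );

        CREATE INDEX participants_cust_name_ix
            ON participants (customer_id, last_name, first_name);

        -- "Which customer does this participant work for?" becomes a plain join
        SELECT c.name, p.first_name, p.last_name
          FROM participants p
          JOIN customers c ON c.customer_id = p.customer_id
         WHERE p.last_name = 'Doe';

    The migration worry still stands: with one shared schema, every customer moves to a new schema version at the same time unless per-customer versioning is layered on top.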

    Read the article

  • How to restore PostgreSQL database from .tar file?

    - by Stephen
    I have all PostgreSQL databases backed up during incremental backups using WHM, which creates a $dbName.tar file for each. The data is stored in these .tar files, but I do not know how to restore it back into the individual databases via SSH; in particular, I'm unsure about the file location. I have been using: pg_restore -d client03 /backup/cpbackup/daily/client03/psql/client03.tar which generates the error 'could not open input file: Permission denied'. Any assistance appreciated.

    Read the article

  • How to recreate missing Team Foundation Server database?

    - by Amadiere
    I've been trying out TFS 2010 Beta 2 on my local machine, or at least I had it installed ready to do so. I had some issues with my MSSQL 2008 server, so I completely uninstalled and re-installed it, and that sorted it. However, I'm now in limbo with TFS: I have the software installed, but none of the SQL databases that go with it. I had no data and am not precious about how I go about fixing it. I figure completely uninstalling and re-installing might be an idea and will most likely fix it (repair didn't work). Is there a quicker way? Is there a command-line utility that I can run, or a SQL script to recreate it all?

    Read the article

  • database security with php page that spits out XML

    - by Rees
    Hello, I just created a PHP page that spits out some data from my database in an XML format. This data is fetched by a Flex application I made. I spent a long time formatting my tables and database information and do not want anyone to be able to simply type www.mysite.com/page_that_spits_out_XML.php and steal my data. However, at the same time I need to be able to access this page from my Flex application. Is there a way I can prevent other people from doing this? Thank you!

    Read the article

  • Normalization in plain English

    - by Yada
    I sort of understand the concept of database normalization but always have a hard time explaining it in plain English, especially in a job interview. I have read the Wikipedia post, but still find it hard to explain the concept to non-developers. "Design a database in a way that avoids duplicated data" is the first thing that comes to mind. Does anyone have a nice way to explain the concept of database normalization in plain English? And what are some nice examples to show the differences between first, second and third normal forms? Say you go to a job interview and the person asks: explain the concept of normalization and how you would go about designing a normalized database. What key points is the interviewer looking for?
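    A concrete example usually lands better in an interview than a definition; the sketch below uses made-up order tables (nothing from the question) to show the before and after:

        -- Unnormalized: customer and product details repeat on every order row,
        -- so one typo quietly creates a "second" customer or product.
        CREATE TABLE orders_flat (
            order_id       INT PRIMARY KEY,
            customer_name  VARCHAR(100),
            customer_email VARCHAR(100),
            product_name   VARCHAR(100),
            product_price  DECIMAL(10,2)
        );

        -- Normalized (roughly 3NF): each fact is stored once and referenced by key.
        CREATE TABLE customers (
            customer_id INT PRIMARY KEY,
            name        VARCHAR(100),
            email       VARCHAR(100)
        );

        CREATE TABLE products (
            product_id INT PRIMARY KEY,
            name       VARCHAR(100),
            price      DECIMAL(10,2)
        );

        CREATE TABLE orders (
            order_id    INT PRIMARY KEY,
            customer_id INT REFERENCES customers (customer_id),
            product_id  INT REFERENCES products (product_id)
        );

    Loosely: 1NF removes repeating groups (one value per column, no phone1/phone2/phone3), 2NF moves columns that depend on only part of a composite key into their own table, and 3NF moves columns that depend on a non-key column (like product_price depending on product_name above) out as well.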

    Read the article

  • mySQL removes first digit

    - by kielie
    Hi guys, I am inputting data into a MySQL database via a PHP script, but for some reason when I check the database, all of the phone numbers have their first digit removed. For example, 0123456789 shows up as 123456789 in the database, but if I change the data type from INT to TEXT, it shows correctly. I am very hesitant to keep it as TEXT, though, as I am sure this will cause complications further down the road as the database app becomes more complicated. Here is the PHP code:

        <?php
        $gender = $_POST['gender'];
        $first_name = $_POST['first_name'];
        $second_name = $_POST['second_name'];
        $id_number = $_POST['id_number'];
        $home_number = $_POST['home_number'];
        $cell_work = $_POST['cell_work'];
        $email_address = $_POST['email_address'];
        $curDate = date("Y-m-d");
        mysql_connect ("server", "user", "pass") or die ('Error: ' . mysql_error());
        mysql_select_db ("database");
        $query = "INSERT INTO table (id,gender,first_name,second_name,id_number,home_number,cell_work,email_address,date) VALUES('NULL','".$gender."','".$first_name."','".$second_name."','".$id_number."','".$home_number."','".$cell_work."','".$email_address."','".$curDate."' )";
        mysql_query($query) or die (mysql_error());
        ?>

    Thanx in advance!
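    The root cause is the column type rather than the PHP: a numeric column cannot preserve leading zeros, because 0123456789 and 123456789 are the same integer. Phone numbers are identifiers, not quantities, so character data is the usual choice. A minimal sketch of the change, reusing the table and column names from the INSERT above (adjust the lengths to taste):

        -- VARCHAR keeps the leading zero and later allows '+', spaces, etc.
        ALTER TABLE `table`
            MODIFY home_number VARCHAR(20),
            MODIFY cell_work   VARCHAR(20);

    VARCHAR also avoids the drawbacks of TEXT (no default values, prefix-only indexing) while still storing the digits exactly as entered.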

    Read the article

  • Storing date and time as epoch vs native datetime format in the database

    - by zakovyrya
    For most of my tasks I find it much easier to work with date and time in the epoch format: it's trivial to calculate a timespan or determine whether some event happened before or after another; I don't have to deal with time-zone issues if the data comes from different geographical sources; and in the case of scripting languages, what I usually get from the database when I request a datetime-typed column is a string that I need to parse before I can work with it. This list can go on, but for me, in order to keep my code portable, that's enough to ditch the database's native datetime format and store date and time as an integer. What do you guys think?
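    For comparison, converting between the two representations is usually a one-liner inside the database itself; a MySQL-flavoured sketch (the question doesn't name an engine, and events/created_at are made-up names):

        -- DATETIME -> epoch seconds, and back. (PostgreSQL offers
        -- extract(epoch from ...) and to_timestamp() for the same job.)
        SELECT UNIX_TIMESTAMP(created_at) AS created_epoch,
               FROM_UNIXTIME(1262304000)  AS as_datetime
          FROM events;

        -- Range scans and indexes behave the same whether the column is an
        -- integer epoch or a native DATETIME.
        SELECT COUNT(*)
          FROM events
         WHERE created_at >= FROM_UNIXTIME(1262304000)
           AND created_at <  FROM_UNIXTIME(1262390400);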

    Read the article

  • Calculation with RESTful web service with MySQL database

    - by Dobby
    I am making some RESTful web services with a MySQL database. I used NetBeans to create the resources for the RESTful service with MySQL, and I can now use GET and POST/PUT to list and add/modify data entities on the MySQL server. Currently, I wish to perform some calculations right after a client POSTs, so that the posted data, along with the calculated results, is inserted into the MySQL database. I am very new to this; I guess I need to add some functions and call them to do the calculation, but I don't know where or how to do that :( Could anyone help me with this issue? Thanks a lot in advance! :)

    Read the article

  • Store JsTree order back into database

    - by Scott
    Hi, I am using jsTree and have got some of it working with my database, such as deleting a node, renaming a node, etc. I am having problems with saving the ROOT folders' order back to the database. When I move the root folders, it doesn't save the order. When I move the sub folders around, it saves the order fine. Does anyone know what I am doing wrong? I think it's my JavaScript for the onmove event. Here's a demo of what I am talking about: http://healthsharing.com/jstree/demo.php
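    Without seeing the onmove handler it's hard to say where the JavaScript goes wrong, but on the database side the usual trick is to store root folders exactly like any other sibling group; a hypothetical layout (not the poster's actual schema):

        -- One row per node; root folders just have parent_id = NULL, so their
        -- ordering is persisted the same way as any sub folder's.
        CREATE TABLE tree_nodes (
            node_id   INT PRIMARY KEY,
            parent_id INT NULL,
            name      VARCHAR(100) NOT NULL,
            position  INT NOT NULL   -- 0-based order among siblings
        );

        -- When a node is dropped into a new slot, rewrite the positions of the
        -- affected sibling group (parent_id IS NULL for the root level).
        UPDATE tree_nodes SET position = 2 WHERE node_id = 17;

    If the root-level move handler never fires or sends no parent id, the server-side update for the "parent_id IS NULL" group simply never runs, which would match the symptom described.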

    Read the article

  • PostGIS - can't create spatially-enabled database

    - by itgorilla
    I'm using Ubuntu 10.10, PostgreSQL 9.0 and PostGIS 1.5. I've installed PostGIS 1.5 from: https://launchpad.net/~ubuntugis/+archive/ubuntugis-unstable I added the PPA first, then ran the command sudo apt-get install postgis to install PostGIS. I've been following these instructions to create a spatially-enabled database: http://ostgis.refractions.net/docs/ch02.html#id2630100 I got to the point where it says: Now load the PostGIS object and function definitions into your database by loading the postgis.sql definitions file (located in [prefix]/share/contrib as specified during the configuration step). psql -d [yourdatabase] -f postgis.sql Well, there is no postgis.sql on my server after the installation. I did a sudo updatedb to make sure I can find postgis.sql, but it's not there. Any ideas? Thank you!
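    Before hunting further for the file, it can be worth confirming whether the package already loaded the PostGIS functions into the target database; this is a generic PostGIS check, not something specific to the Ubuntu packages:

        -- Succeeds only if postgis.sql (or the packaged equivalent) has been
        -- loaded into the current database; otherwise the function won't exist.
        SELECT postgis_full_version();

    If the function is missing, the definitions file still has to be located and loaded with psql as the documentation describes.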

    Read the article

  • Are database triggers evil?

    - by WW
    Are database triggers a bad idea? In my experience they are evil, because they can result in surprising side effects and are difficult to debug (especially when one trigger fires another). Often developers do not even think of looking to see if there is a trigger. On the other hand, it seems like if you have logic that must occur every time a new FOO is created in the database, then the most foolproof place to put it is an insert trigger on the FOO table. The only time we're using triggers is for really simple things like setting the ModifiedDate.
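    For reference, the "really simple things" case looks something like the sketch below in SQL Server syntax (the Foo table and FooId key are illustrative, not from the question):

        -- Keeps ModifiedDate current on every UPDATE without any application code.
        -- Nothing in the calling code hints that this runs, which is exactly the
        -- debuggability complaint raised above.
        CREATE TRIGGER trg_Foo_SetModifiedDate
        ON dbo.Foo
        AFTER UPDATE
        AS
        BEGIN
            SET NOCOUNT ON;
            UPDATE f
               SET ModifiedDate = GETDATE()
              FROM dbo.Foo AS f
              JOIN inserted AS i ON i.FooId = f.FooId;
        END;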

    Read the article

  • Restoring database with SMO - Reporting progress problems

    - by madlan
    I'm using the code below to restore a database in VB.NET. This works, but it causes the interface to lock up if the user clicks anything. Also, I cannot get the progress label to update incrementally; it's blank until the restore is complete and then displays 100%.

        Sub DoRestore()
            Dim svr As Server = New Server("Server\SQL2008")
            Dim res As Restore = New Restore()
            res.Devices.AddDevice("C:\MyDB.bak", DeviceType.File)
            res.Database = "MyDB"
            res.RelocateFiles.Add(New RelocateFile("MyDB_Data", "C:\MyDB.mdf"))
            res.RelocateFiles.Add(New RelocateFile("MyDB_Log", "C:\MyDB.ldf"))
            res.PercentCompleteNotification = 1
            AddHandler res.PercentComplete, AddressOf ProgressEventHandler
            res.SqlRestore(svr)
        End Sub

        Private Sub ProgressEventHandler(ByVal sender As Object, ByVal e As PercentCompleteEventArgs)
            Label3.Text = ""
            ProgressBar.Value = e.Percent
            LblProgress.Text = e.Percent.ToString
        End Sub

    Read the article

  • a better way to initialize database ONCE when rail server starts

    - by Hadi
    I would like to initialize the database the first time the server is started, which involves calling a class method. Given that the class name is Product, I first put an .rb file at config\initializers\init.rb, since it gets called automatically. Inside init.rb: Product.populate_db. Everything works OK until the database is deleted and I try to run rake db:migrate; the migration fails, saying that it cannot find the 'product' table. The solution I came up with is: (1) take out init.rb, (2) run rake db:migrate, (3) put back init.rb, (4) run the server. My Rails application is mainly for reporting and the data is seeded from another application, so I have to do the above steps every day. Is there a better way to do the initialization?

    Read the article

  • LDAP: Extend database using referral

    - by ecapstone
    My company uses an off-site LDAP server to handle authentication. I'm currently working on a local VPN for my branch that needs to use the off-site LDAP to check user's usernames and passwords, but I don't want every employee to have access to the VPN - I need to be able to control whether users can authenticate with the off-site LDAP based on whether they're allowed to use the VPN. My current solution involves having our own local LDAP server, which has a referral to the off-site server (I got most of my information from here: http://www.zytrax.com/books/ldap/ch7/referrals.html). This means that when local users try to check their credentials with the local server, it redirects them to the off-site server, which checks the credentials. This works for authentication, but not for authorization. It would be easiest to add a vpn_users group or is_vpn_user attribute on the off-site server, but, well, that's above my pay grade. Is there any way I can use the local server to control whether users have access to the VPN without needing to change the off-site server? If I could somehow use it to have a local vpn_users group without the users in it having to be located on the local server, that would probably work, but I have no idea how to set that up or if LDAP even supports such a configuration. For reference, I'm using the openvpn-auth-ldap (https://code.google.com/p/openvpn-auth-ldap/) plugin.

    Read the article

  • Multiple Tables or Multiple Schema

    - by Yan Cheng CHEOK
    http://stackoverflow.com/questions/1152405/postgresql-is-better-using-multiple-databases-with-1-schema-each-or-1-database I am new to the schema concept in PostgreSQL. For the scenario mentioned above, I was wondering: Why don't we use a single database (with the default schema named public)? Why don't we have a single table to store the rows for multiple users, with the other tables that hold user-related information pointing to the user table via foreign keys? Can anyone provide me with a real-world scenario in which a single database with multiple schemas is extremely useful and can't be solved by a conventional single database with a single schema?
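    To make the schema concept concrete, here is a minimal PostgreSQL sketch (tenant and table names are made up): each schema holds its own copy of identically named tables, and search_path decides which copy an unqualified query touches.

        -- One schema per tenant; both contain an "invoices" table.
        CREATE SCHEMA tenant_a;
        CREATE SCHEMA tenant_b;

        CREATE TABLE tenant_a.invoices (id serial PRIMARY KEY, total numeric);
        CREATE TABLE tenant_b.invoices (id serial PRIMARY KEY, total numeric);

        -- The same application SQL serves either tenant; only search_path changes.
        SET search_path TO tenant_a;
        SELECT count(*) FROM invoices;   -- reads tenant_a.invoices

        SET search_path TO tenant_b;
        SELECT count(*) FROM invoices;   -- reads tenant_b.invoices

    The shared-schema alternative described in the question (one user table plus foreign keys) also works; schemas mainly earn their keep when tenants must be backed up, restored, or dropped independently without running separate databases.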

    Read the article

  • Load a MySQL innodb database into memory

    - by jack
    I have a MySQL InnoDB database at 1.9GB, as shown by the following command:

        SELECT table_schema "Data Base Name",
               sum( data_length + index_length ) / 1024 / 1024 "Data Base Size in MB",
               sum( data_free ) / 1024 / 1024 "Free Space in MB"
          FROM information_schema.TABLES
         GROUP BY table_schema;

        +--------------------+----------------------+------------------+
        | Data Base Name     | Data Base Size in MB | Free Space in MB |
        +--------------------+----------------------+------------------+
        | database_name      |        1959.73437500 |   31080.00000000 |
        +--------------------+----------------------+------------------+

    My questions are:
    1. Does it mean that if I set innodb_buffer_pool_size to 2GB or larger, the whole database can be loaded into memory, so far fewer reads from disk are needed?
    2. What does the free space of 31GB mean?
    3. If the maximum RAM that can be allocated to innodb_buffer_pool_size is 1GB, is it possible to specify which tables to load into memory while keeping the others always read from disk?
    Thanks in advance.
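    If it helps with question 3, a per-table version of the same sizing query (standard information_schema columns, nothing version-specific) shows which tables would actually be competing for a 1GB buffer pool:

        -- Size of each table (data + indexes) in MB, largest first
        SELECT table_name,
               ROUND((data_length + index_length) / 1024 / 1024, 2) AS size_mb
          FROM information_schema.TABLES
         WHERE table_schema = 'database_name'
         ORDER BY size_mb DESC;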

    Read the article

  • PHP Error handling in WordPress plugin

    - by AaronM
    I am a newbie to both PHP and WordPress (but do OK in C#), and am struggling to understand the error handling in a custom plugin I am attempting to write. The basic job of the plugin is to query an existing MSSQL database (note it's not the standard MySQL db...) and return the rows to the screen. This was working well, but the hosting provider has taken my database offline, which led me to the error handling problem (which I thought was OK). The following code fails to connect to the database (as expected), but it puts an error onto the screen and stops the page processing. It does not even output the 'or die' error text. QUESTION: How can I just output a simple "Cant load data" message and continue on normally?

        function generateData() {
            global $post;
            if ("$post->post_title" == "Home") {
                try {
                    $myServer = "<servername>";
                    $myUser = "<username>";
                    $myPass = "<password>";
                    $myDB = "<dbName>";

                    //connection to the database
                    $dbhandle = mssql_connect($myServer, $myUser, $myPass)
                        or die("Couldn't open database $myDB");

                    //... query processing here...
                } catch (Exception $e) {
                    echo "Cannot load data<br />";
                }
            }
            return $content;
        }

    Error being generated (line 31 is $dbhandle = mssql_connect...):

        Warning: mssql_connect() [function.mssql-connect]: Unable to connect to server: <servername> in <file path> on line 31
        Fatal error: Maximum execution time of 30 seconds exceeded in <file path> on line 31

    Read the article

  • Zend database query result converts column values to null

    - by David Zapata
    Hi again. I am using the following steps to get some records from my database. Create the needed models (from the params module):

        $obj_paramtype_model = new Params_Model_DbTable_Paramtype();
        $obj_param_model = new Params_Model_DbTable_Param();

    Getting the available locales from the database:

        // This returns a Zend_Db_Table_Row_Abstract class object
        $obj_paramtype = $obj_paramtype_model->getParamtypeByValue('available_locales');

        // This is a query used to add conditions to the next statement. It is executed
        // from the Params_Model_DbTable_Param instance class, which depends on the
        // Params_Model_DbTable_Paramtype class (the reference map and dependentTables
        // arrays are fine in both classes)
        $obj_select = $this->select()->where('deleted_at IS NULL')->order('name');

        // Execute the next query, applying the select restrictions. This returns a
        // Zend_Db_Table_Rowset_Abstract class object. This means "Find Params by Paramtype"
        $obj_params_rowset = $obj_paramtype->findDependentRowset('Params_Model_DbTable_Param', 'Paramtype', $obj_paramtype);

        // Here the firebug log displays the queries....
        Zend_Registry::get('log')->debug($obj_params_rowset);

    I have a profiler for all my DB executions from Zend. At this point the log and profiler objects (which include Firebug writers) show the executed SQL queries, and the last line displays the resulting Zend_Db_Table_Rowset_Abstract class object. If I execute the SQL queries in a MySQL client, the results are as expected. But the Zend Firebug log writer displays as NULL the column values with Latin characters (ñ). In other words, the external SQL client shows es_CO | Español de Colombia and en_US | English of United States, but the query results from Zend display (and return) es_CO | null and en_US | English of United States. I've deleted the ñ character from Español de Colombia and the query results are just fine in my Zend log Firebug screen, and in the final Zend Form element. The MySQL database, tables and columns are in UTF-8 with utf8_unicode_ci collation. All my Zend Framework pages use the UTF-8 charset. I'm using XAMPP 1.7.1 (PHP 5.2.9, Apache on port 90 and MySQL 5.1.33-community) running on Windows 7 Ultimate; Zend Framework 1.10.1. I'm sorry if there is so much information, but I don't really know why this could happen, so I tried to provide as much related information as I could to help find an answer.

    Read the article

  • Using the ASP.NET membership provider database with your own database?

    - by Shaharyar
    Hello everybody, we are developing an ASP.NET MVC application that currently uses its own database, ApplicationData, for the domain models and another one, Membership, for user management / the membership provider. We do access restrictions using data annotations in our controllers: [Authorize(Roles = "administrators, managers")] This worked great for simple use cases. As we scale our application, our customer wants to restrict specific users to specific areas of our ApplicationData database. Each of our products contains a foreign key referring to the region the product was assembled in. A user story would be: users in the role NewYorkManagers should only be able to edit / see products that are assembled in New York. We created a placeholder table UserRightsRegions that contains the UserId and the RegionId. How can I link the ApplicationData and Membership databases so this works properly, i.e. have cross-database key references? (Is something like this even possible?) All help is more than appreciated!
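    SQL Server won't enforce a foreign key constraint that points into another database, but cross-database joins with three-part names work fine; a hedged sketch, assuming the default membership schema's aspnet_Users table and an illustrative Products table (neither is shown in the question):

        -- Products a given user may see, joining the two databases on UserId.
        -- ApplicationData.dbo.UserRightsRegions(UserId, RegionId) is the
        -- question's placeholder table; Products.RegionId is assumed.
        DECLARE @userName nvarchar(256);
        SET @userName = N'jsmith';

        SELECT p.*
          FROM ApplicationData.dbo.Products AS p
          JOIN ApplicationData.dbo.UserRightsRegions AS urr
               ON urr.RegionId = p.RegionId
          JOIN Membership.dbo.aspnet_Users AS u
               ON u.UserId = urr.UserId
         WHERE u.UserName = @userName;

    Referential integrity between UserRightsRegions.UserId and the membership users then has to be enforced in application code (or triggers), since the engine won't do it across databases.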

    Read the article

  • testing database application remotely

    - by vbNewbie
    With advice from users here I was able to deploy an application that connects to a SQL Server 2008 database onto a server. The connection string's data source points to my machine, since the database is stored on my machine temporarily. I do not have access to another machine and wanted to test the application, so I connected to the server remotely to test it, but it does not connect to the database server. I have TCP/IP enabled, the port at the default 1433, and remote connections checked. Is there something I am missing? Please help.

    Read the article

  • Which third party tools/library are available for NoSQL databases?

    - by Horcrux7
    I know that NoSQL databases are very new. I am also new to this area. But do tools/libraries already exist to make life easier, like there are for SQL databases? I am thinking of tools for managing, maintaining, viewing or reporting on the data. There could also be libraries that make working with the database easier, or an abstract database layer that allows changing the database later. I would prefer Java libraries, but others are interesting too.

    Read the article

  • jQuery Form Processing With PHP to MYSQL Database Using $.ajax Request

    - by FrustratedUser
    Question: How can I process a form using jQuery and the $.ajax request so that the data is passed to a script which writes it to a database? Problem: I have a simple email signup form that when processed, adds the email along with the current date to a table in a MySQL database. Processing the form without jQuery works as intended, adding the email and date. With jQuery, the form submits successfully and returns the success message. However, no data is added to the database. Any insight would be greatly appreciated! <!-- PROCESS.PHP --> <?php // DB info $dbhost = '#'; $dbuser = '#'; $dbpass = '#'; $dbname = '#'; // Open connection to db $conn = mysql_connect($dbhost, $dbuser, $dbpass) or die ('Error connecting to mysql'); mysql_select_db($dbname); // Form variables $email = $_POST['email']; $submitted = $_POST['submitted']; // Clean up function cleanData($str) { $str = trim($str); $str = strip_tags($str); $str = strtolower($str); return $str; } $email = cleanData($email); $error = ""; if(isset($submitted)) { if($email == '') { $error .= '<p class="error">Please enter your email address.</p>' . "\n"; } else if (!eregi("^[A-Z0-9._%-]+@[A-Z0-9._%-]+\.[A-Z]{2,4}$", $email)) { $error .= '<p class="error">Please enter a valid email address.</p>' . "\n"; } if(!$error){ echo '<p id="signup-success-nojs">You have successfully subscribed!</p>'; // Add to database $add_email = "INSERT INTO subscribers (email,date) VALUES ('$email',CURDATE())"; mysql_query($add_email) or die(mysql_error()); }else{ echo $error; } } ?> <!-- SAMPLE.PHP --> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Sample</title> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.min.js"></script> <script type="text/javascript"> $(document).ready(function(){ // Email Signup $("form#newsletter").submit(function() { var dataStr = $("#newsletter").serialize(); alert(dataStr); $.ajax({ type: "POST", url: "process.php", data: dataStr, success: function(del){ $('form#newsletter').hide(); $('#signup-success').fadeIn(); } }); return false; }); }); </script> <style type="text/css"> #email { margin-right:2px; padding:5px; width:145px; border-top:1px solid #ccc; border-left:1px solid #ccc; border-right:1px solid #eee; border-bottom:1px solid #eee; font-size:14px; color:#9e9e9e; } #signup-success { margin-bottom:20px; padding-bottom:10px; background:url(../img/css/divider-dots.gif) repeat-x 0 100%; display:none; } #signup-success p, #signup-success-nojs { padding:5px; background:#fff; border:1px solid #dedede; text-align:center; font-weight:bold; color:#3d7da5; } </style> </head> <body> <?php include('process.php'); ?> <form id="newsletter" class="divider" name="newsletter" method="post" action=""> <fieldset> <input id="email" type="text" name="email" /> <input id="submit-button" type="image" src="<?php echo $base_url; ?>/assets/img/css/signup.gif" alt=" SIGNUP " /> <input id="submitted" type="hidden" name="submitted" value="true" /> </fieldset> </form> <div id="signup-success"><p>You have successfully subscribed!</p></div> </body> </html>

    Read the article
