Search Results

Search found 43235 results on 1730 pages for 'database connection'.

Page 13/1730

  • Drop database on DB2 9.5 - SQL1035N The database is currently in use

    - by Tommy
    I've never got this working the first time, but now I can't seem to do it at all. There is a connection pool somewhere using the database, so trying to drop the database while an application is using it should give this error. The problem is that there are no connections to the database when I issue these commands:
      db2 connect to mydatabase
      db2 quiesce database immediate force connections
      db2 connect reset
      db2 drop database mydatabase
    This always gives: SQL1035N The database is currently in use. SQLSTATE=57019. Running "db2 list applications" shows no connections/applications. I can even deactivate the database, but still can't drop it:
      db2 => deactivate database mydatabase
      DB20000I The DEACTIVATE DATABASE command completed successfully.
      db2 => drop database mydatabase
      SQL1035N The database is currently in use. SQLSTATE=57019
      db2 =>
    Anyone got any clues? I'm running the cmd windows as the local administrator (Windows 2008), and this is also the admin for DB2. The connection-pool user cannot connect during the quiesce state.
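
    One commonly suggested escalation path is sketched below - not a guaranteed fix. The usual culprit after a quiesce is that the CLP window itself still holds a back-end connection, which "db2 connect reset" alone does not end; the later steps are progressively heavier hammers:
      db2 connect reset
      db2 terminate
      rem terminate ends the CLP back-end process, which otherwise keeps a connection open
      db2 force application all
      rem if the drop still fails, bounce the instance so nothing can hold the database, then drop it
      db2stop force
      db2start
      db2 drop database mydatabase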

    Read the article

  • Problem with hadoop start-dfs.sh

    - by user288501
    I installed and configured Hadoop on my Ubuntu 14.04 server, virtualized inside of Hyper-V; however, I am getting an issue when I run start-dfs.sh:
      root@sUbuntu01:/var/log# start-dfs.sh
      14/06/04 15:27:08 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
      Starting namenodes on [OpenJDK 64-Bit Server VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now. It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'. localhost]
      sed: -e expression #1, char 6: unknown option to `s'
      -c: Unknown cipher type 'cd'
      localhost: Ubuntu 14.04 LTS
      localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-root-namenode-sUbuntu01.out
      noexecstack'.: ssh: Could not resolve hostname noexecstack'.: Name or service not known
      '-z: ssh: Could not resolve hostname '-z: Name or service not known
      which: ssh: connect to host which port 22: Connection timed out
      [... many more lines in the same pattern - every word of the VM warning is reported either as "ssh: Could not resolve hostname <word>" or as "ssh: connect to host <word> port 22: Connection timed out" ...]
      localhost: Ubuntu 14.04 LTS
      localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-sUbuntu01.out
      localhost: OpenJDK 64-Bit Server VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
      localhost: It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
      Starting secondary namenodes [OpenJDK 64-Bit Server VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now. It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'. 0.0.0.0]
      sed: -e expression #1, char 6: unknown option to `s'
      0.0.0.0: Ubuntu 14.04 LTS
      0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-root-secondarynamenode-sUbuntu01.out
      0.0.0.0: OpenJDK 64-Bit Server VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
      0.0.0.0: It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
      [... another run of the same per-word ssh "Could not resolve hostname" / "Connection timed out" errors ...]
      14/06/04 15:36:02 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    Any advice?
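
    A hedged observation: the flood of errors comes from each word of the JVM stack-guard warning being handed to ssh as a hostname by the start scripts. A commonly reported workaround is to point the JVM at the native library directory in hadoop-env.sh so the warning is not emitted at all - a sketch, assuming the /usr/local/hadoop layout shown in the log:
      # add to /usr/local/hadoop/etc/hadoop/hadoop-env.sh (or the shell profile of the user running start-dfs.sh)
      export HADOOP_COMMON_LIB_NATIVE_DIR=/usr/local/hadoop/lib/native
      export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=/usr/local/hadoop/lib/native"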

    Read the article

  • Managing Database Clusters - A Whole Lot Simpler

    - by mat.keep(at)oracle.com
    Clustered computing brings with it many benefits: high performance, high availability, scalable infrastructure, etc. But it also brings with it more complexity. Why? Well, by its very nature there are more "moving parts" to monitor and manage - from physical, virtual and logical hosts, to fault detection and failover software, to redundant networking components - the list goes on. And a cluster that isn't effectively provisioned and managed will cause more downtime than the standalone systems it is designed to improve upon. Not so great.... When it comes to the database industry, analysts already estimate that 50% of a typical database's Total Cost of Ownership is attributable to staffing and downtime costs. These costs will only increase if a database cluster is too hard to properly administer. Over the past 9 months, monitoring and management have been a major focus in the development of the MySQL Cluster database, and on Tuesday 12th January the product team will be presenting the output of that development in a new webinar. Even if you can't make the date, it is still worth registering so you will receive automatic notification when the on-demand replay is available. In the webinar, the team will cover:
      * NDBINFO: released with MySQL Cluster 7.1, NDBINFO presents real-time status and usage statistics, providing developers and DBAs with a simple means of pro-actively monitoring and optimizing database performance and availability.
      * MySQL Cluster Manager (MCM): available as part of the commercial MySQL Cluster Carrier Grade Edition, MCM simplifies the creation and management of MySQL Cluster by automating common management tasks, delivering higher administration productivity and enhancing cluster agility. Tasks that used to take 46 commands can be reduced to just one!
      * MySQL Cluster Advisors & Graphs: part of the MySQL Enterprise Monitor and available in the commercial MySQL Cluster Carrier Grade Edition, the Enterprise Advisor includes automated best practice rules that alert on key performance and availability metrics from MySQL Cluster data nodes.
    You'll also learn how you can get started evaluating and using all of these tools to simplify MySQL Cluster management. This session will last around an hour and will include interactive Q&A throughout. You can learn more about MySQL Cluster Manager from this whitepaper and on-line demonstration. You can also download the packages from eDelivery (just select "MySQL Database" as the product pack, select your platform, click "Go" and then scroll down to get the software). While managing clusters will never be easy, the webinar will show you how it just got a whole lot simpler!
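
    As a taste of the NDBINFO piece, two hedged example queries against the ndbinfo schema that ships with MySQL Cluster 7.1 (run from any SQL node):
      -- real-time memory consumption reported by each data node
      SELECT * FROM ndbinfo.memoryusage;
      -- status and uptime of every node in the cluster
      SELECT * FROM ndbinfo.nodes;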

    Read the article

  • Steps to Mitigate Database Security Worst Practices

    - by Troy Kitch
    The recent Top 6 Database Security Worst Practices webcast revealed the Top 6, and a bonus 7th, database security worst practices:
      1. Privileged user "all access pass"
      2. Allow application bypass
      3. Minimal and inconsistent monitoring/auditing
      4. Not securing application data from OS-level user
      5. No SQL injection defense
      6. Sensitive data in non-production environments
      7. Not securing complete database environment
    These practices are uncovered in the 2010 IOUG Data Security Survey. As part of the webcast we looked at each one of these practices and how you can mitigate them with the Oracle Defense-in-Depth approach to database security. There's a lot of additional information to glean from the webcast, so I encourage you to check it out here and see how your organization measures up.
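
    To make worst practice #5 concrete, here is a minimal, hedged C# sketch of the basic defense - binding user input as a parameter instead of concatenating it into the SQL string (the table and column names are hypothetical):
      using System.Data.SqlClient;

      static string LookupCustomerName(string connectionString, int requestedId)
      {
          // the value is bound as a parameter, never concatenated into the SQL text
          const string sql = "SELECT name FROM customers WHERE customer_id = @id";
          using (var conn = new SqlConnection(connectionString))
          using (var cmd = new SqlCommand(sql, conn))
          {
              cmd.Parameters.AddWithValue("@id", requestedId);
              conn.Open();
              return (string)cmd.ExecuteScalar();
          }
      }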

    Read the article

  • Oracle Announces Oracle Exadata X3 Database In-Memory Machine

    - by jgelhaus
    Fourth Generation Exadata X3 Systems are Ideal for High-End OLTP, Large Data Warehouses, and Database Clouds; Eighth-Rack Configuration Offers New Low-Cost Entry Point.
    ORACLE OPENWORLD, SAN FRANCISCO – October 1, 2012
    News Facts: During his opening keynote address at Oracle OpenWorld, Oracle CEO Larry Ellison announced the Oracle Exadata X3 Database In-Memory Machine - the latest generation of its Oracle Exadata Database Machines. The Oracle Exadata X3 Database In-Memory Machine is a key component of the Oracle Cloud. Oracle Exadata X3-2 Database In-Memory Machine and Oracle Exadata X3-8 Database In-Memory Machine can store up to hundreds of Terabytes of compressed user data in Flash and RAM memory, virtually eliminating the performance overhead of reads and writes to slow disk drives, making Exadata X3 systems the ideal database platforms for the varied and unpredictable workloads of cloud computing. In order to realize the highest performance at the lowest cost, the Oracle Exadata X3 Database In-Memory Machine implements a mass memory hierarchy that automatically moves all active data into Flash and RAM memory, while keeping less active data on low-cost disks. With a new Eighth-Rack configuration, the Oracle Exadata X3-2 Database In-Memory Machine delivers a cost-effective entry point for smaller workloads, testing, development and disaster recovery systems, and is a fully redundant system that can be used with mission critical applications.
    Next-Generation Technologies Deliver Dramatic Performance Improvements: Oracle Exadata X3 Database In-Memory Machines use a combination of scale-out servers and storage, InfiniBand networking, smart storage, PCI Flash, smart memory caching, and Hybrid Columnar Compression to deliver extreme performance and availability for all Oracle Database workloads. Oracle Exadata X3 Database In-Memory Machine systems leverage next-generation technologies to deliver significant performance enhancements, including:
      - Four times the Flash memory capacity of the previous generation, with up to 40 percent faster response times and 100 GB/second data scan rates. Combined with Exadata’s unique Hybrid Columnar Compression capabilities, hundreds of Terabytes of user data can now be managed entirely within Flash;
      - 20 times more capacity for database writes through updated Exadata Smart Flash Cache software. The new Exadata Smart Flash Cache software also runs on previous generation Exadata systems, increasing their capacity for writes tenfold;
      - 33 percent more database CPU cores in the Oracle Exadata X3-2 Database In-Memory Machine, using the latest 8-core Intel® Xeon E5-2600 series of processors;
      - Expanded 10Gb Ethernet connectivity to the data center in the Oracle Exadata X3-2, providing 40 10Gb network ports per rack for connecting users and moving data;
      - Up to 30 percent reduction in power and cooling.
    Configured for Your Business, Available Today: Oracle Exadata X3-2 Database In-Memory Machine systems are available in a Full-Rack, Half-Rack, Quarter-Rack, and the new low-cost Eighth-Rack configuration to satisfy the widest range of applications. Oracle Exadata X3-8 Database In-Memory Machine systems are available in a Full-Rack configuration, and both X3 systems enable multi-rack configurations for virtually unlimited scalability. Oracle Exadata X3-2 and X3-8 Database In-Memory Machines are fully compatible with prior Exadata generations, and existing systems can also be upgraded with Oracle Exadata X3-2 servers.
    Oracle Exadata X3 Database In-Memory Machine systems can be used immediately with any application certified with Oracle Database 11g R2 and Oracle Real Application Clusters, including SAP, Oracle Fusion Applications, Oracle’s PeopleSoft, Oracle’s Siebel CRM, the Oracle E-Business Suite, and thousands of other applications.
    Supporting Quotes: “Forward-looking enterprises are moving towards Cloud Computing architectures,” said Andrew Mendelsohn, senior vice president, Oracle Database Server Technologies. “Oracle Exadata’s unique ability to run any database application on a fully scale-out architecture using a combination of massive memory for extreme performance and low-cost disk for high capacity delivers the ideal solution for Cloud-based database deployments today.”
    Supporting Resources: Oracle Press Release; Oracle Exadata Database Machine; Oracle Exadata X3-2 Database In-Memory Machine; Oracle Exadata X3-8 Database In-Memory Machine; Oracle Database 11g; Follow Oracle Database via Blog, Facebook and Twitter; Oracle OpenWorld 2012; Oracle OpenWorld 2012 Keynotes; Like Oracle OpenWorld on Facebook; Follow Oracle OpenWorld on Twitter; Oracle OpenWorld Blog; Oracle OpenWorld on LinkedIn; Mark Hurd's keynote with Andy Mendelsohn and Juan Loaiza - watch for the replay to be available soon at http://www.youtube.com/user/Oracle or http://www.oracle.com/openworld/live/on-demand/index.html

    Read the article

  • Offloading (Some) EBS 12 Reporting to Active Data Guard Instances

    - by Steven Chan
    For most Oracle Database users, Oracle Active Data Guard allows users to:
      - Create a physical standby database for business continuity and disaster recovery
      - Offload reporting from the production database to the read-only physical standby database
    E-Business Suite customers have been able to use Active Data Guard to create physical standby databases for their EBS environments since the feature was introduced with the 11g database. EBS sysadmins can use the generic Active Data Guard documentation to take advantage of the Active Data Guard standby database capabilities. I am pleased to announce that it is now possible to offload a subset of ReportWriter-based reports -- but not all -- from a production EBS environment to an Active Data Guard physical standby database. But before I go into the details of this newly certified configuration, it's necessary to understand some details about what happens whenever someone attempts to access the E-Business Suite.
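
    For context on what offloading to an Active Data Guard standby means mechanically, this is the generic 11g pattern on the standby side - a sketch only, since the EBS-certified configuration has its own documented steps:
      -- on the physical standby, from SQL*Plus as SYSDBA
      ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;   -- pause redo apply
      ALTER DATABASE OPEN READ ONLY;                            -- open for read-only reporting
      ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;   -- resume real-time apply while open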

    Read the article

  • FAQ and best tips regarding learning databases

    - by AdityaGameProgrammer
    For a programmer with no prior exposure to databases:
      - What would be a good database to learn: Oracle vs. SQL Server vs. MySQL vs. PostgreSQL? I have come across a lot of discussion of MySQL and PostgreSQL and frankly I am confused about which to start with. Are these very different, in the sense that if one had to switch, would the exposure to one be counter-productive to learning the other?
      - Is working with databases heavily platform dependent?
      - What exactly do people mean by database programming vs. administration?
      - Do people choose databases based on the programming language used for the application being developed?
      - In general, when working with databases, is it implicit that we work with some server?
      - Does the choice of database differ when it comes to game development? If so, what factors does it differ by?
      - What are the best tips that you have found to be useful when learning databases?
    Edit: Some FAQs I had and found the same on SO:
      - What should every developer know about databases?
      - Which database if learning from scratch in 2010?
      - For a beginner, is there much difference between MySQL and PostgreSQL?
      - What RDBMS should I learn/use? (MySQL/SQL Server/Oracle etc.)
      - To what extent should a developer learn databases?
      - How are database programmers different from other programmers?
      - What kind of databases are used in games?

    Read the article

  • Protecting offline IRM rights and the error "Unable to Connect to Offline database"

    - by Simon Thorpe
    One of the most common problems I get asked about Oracle IRM is in relation to the error message "Unable to Connect to Offline database". This error message is a result of how Oracle IRM protects the cached rights on the local machine; if that cache has become invalid in any way, this error is thrown.
    Offline rights and security
    First we need to understand how Oracle IRM handles offline use. The way it is implemented is one of the main reasons why Oracle IRM is the leading document security solution; it demonstrates our methodology of ensuring that solutions address both security and usability, and it puts the balance of these two in your control. Each classification has a set of predefined roles that the manager of the classification can assign to users. Each role has an offline period which determines the amount of time a user can access content without having to communicate with the IRM server. By default for the context model, which is the classification system that ships out of the box with Oracle IRM, the offline period for each role is 3 days. This is easily changed, however, and can be as low as under an hour or as long as years. It is also possible to switch off the ability to access content offline, which can be useful when content is very sensitive and requires a tight leash.
    So when a user is online, transparently in the background, the Oracle IRM Desktop communicates with the server and updates the user's rights and offline periods. This transparent synchronization period is determined by the server and communicated to all IRM Desktops, and allows users' rights to be kept up to date without their intervention. This allows us to support some very important scenarios which are key to a successful IRM solution.
    A user doesn't have to make any decision when going offline; they simply unplug their laptop and they already have their offline periods synchronized to the maximum values. Any solution that requires a user to make a decision at the point of going offline isn't going to work, because people forget to do this and will therefore be unable to legitimately access their content offline.
    If your rights change to REMOVE your access to content, this also happens in the background. This is very useful when someone has an offline duration of a week and they happen to make a connection to the internet 3 days into that offline period: the Oracle IRM Desktop detects this online state and automatically updates all rights for the user. This means the business risk is reduced when setting long offline periods; because of the daily transparent sync, you can reflect changes as soon as the user is online. Of course, if they choose not to come online at all during that week offline period, you cannot effect change, but you take that risk in giving the 7 day offline period in the first place.
    If you are added to a NEW classification during the day, this will automatically be synchronized without the user even having to open a piece of content secured against that classification. This is very important; consider the scenario where a senior executive downloads all their email but doesn't open any of it, disconnects the laptop and then gets on a plane. During the flight they attempt to open a document attached to a downloaded email which has been secured against an IRM classification the user was not even aware they had access to. Because their new role in this classification was automatically synchronized, their experience is a good one and the document opens.
    More information on how the Oracle IRM classification model works can be found in this article by Martin Abrahams.
    So what about problems accessing the offline rights database?
    So onto the core issue... when these rights are cached to your machine they are stored in an encrypted database. The encryption of this offline database is keyed to the instance of the installation of the IRM Desktop and the Windows user account. Why? Well, what you do not want to happen is for someone to get their rights for content and then copy these files across hundreds of other machines, thereby getting access to sensitive content across many environments. The IRM server has a setting which controls how many times you can cache these rights on unique machines. This is because people typically access IRM content on more than one computer: their work desktop, a laptop and often a home computer. So Oracle IRM allows for the usability of caching rights on more than one computer whilst retaining strong security over this cache. So what happens if these files are corrupted in some way? That's when you will see the error, Unable to Connect to Offline database. The most common instance of seeing this is when you are using virtual machines and copy them from one computer to the next. The virtual machine software, VMware Workstation for example, makes changes to the unique information of that virtual machine and as such invalidates the offline database.
    How do you solve the problem?
    The resolution is, however, simple. You just delete all of the offline database files on the machine and they will be recreated with working encryption when the Oracle IRM Desktop next starts. However, this does mean that the IRM server will think you have your rights cached to more than one computer, and you will need to re-request your rights even though you are only going to be accessing them on one, because it still thinks the old cache is valid. So be aware: it is good practice to increase the server limit from the default of 1 to, say, 3 or 4. This is done using the Enterprise Manager instance of IRM. To delete these offline files I have a simple .bat file you can use: Download DeleteOfflineDBs.bat. Note that this uses pskill to stop irmBackground.exe from running. This is part of the IRM Desktop and holds open a lock to the offline database. Either kill this from Task Manager or use pskill as part of the script.
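
    If the download link is not available, the gist of such a script is sketched below. The offline database folder differs between IRM Desktop versions, so the path here is a placeholder you must replace, and the built-in taskkill stands in for pskill:
      @echo off
      rem Sketch only - the DeleteOfflineDBs.bat linked above is the supported route.
      rem Stop the process that holds the lock on the offline database.
      taskkill /F /IM irmBackground.exe
      rem PLACEHOLDER - substitute the actual offline database folder for your IRM Desktop installation.
      set IRM_OFFLINE_DB=%LOCALAPPDATA%\Oracle\IRM
      del /Q "%IRM_OFFLINE_DB%\*.*"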

    Read the article

  • Oracle Database Firewall 5.0 is now available for download

    - by Lajos Sárecz
    On May 20, 2010 we announced the acquisition of Secerno, the company developing a database firewall solution. Relatively little has been heard about the product since then; here in Hungary the only presentation on it was given by Stuart Sharp at the autumn ITBN conference, in barely half an hour. Moreover, since the acquisition the product could not even be purchased, because the development work following the merge was not yet complete. Since January 11, however, the Oracle Database Firewall 5.0 installer has been available for download from the Oracle eDelivery site, within the Oracle Database Product Pack, for the Linux x86 platform. The Database Firewall can be considered the first line of database defense. It monitors database activity on the network in real time. With its SQL language analyzer it can detect external and internal attacks - transactions executed without authorization and with malicious intent - with remarkable precision. The sophistication of the SQL analyzer makes filtering accuracy and reliability of close to 100% possible, which is extremely important: it is not enough to filter out every attacking transaction, it is just as important not to block a single transaction belonging to normal business operation, since that too can cause serious business damage. Anyone who registers for and attends our Oracle Security Summit event on January 27 can learn more about the database firewall; according to the plans Stuart Sharp will present again, but this time he will have a full hour and can share far more detail with Hungarian customers and partners. Before the Database Firewall presentation I will give a roughly half-hour overview of Oracle Database security solutions.

    Read the article

  • Share laptop's Wi-Fi with a LAN connection and a mobile device

    - by xperator
    OS: Windows 7 64-bit. I want to share my laptop's internet connection between my PC and my Android device, but I can only do one of them at a time. The laptop is connected to the internet wirelessly. The PC is connected to the laptop using an Ethernet cable and the internet is shared between them. I want to connect my mobile device to my laptop by making the laptop into a Wi-Fi hotspot:
      PC (Ethernet) == Laptop (connected to the net by Wi-Fi) <== Mobile device (Wi-Fi hotspot)
    I have 3 connections on my laptop:
      Wireless Network Connection (internet - shared)
      Local area connection (PC)
      Wireless Network Connection 2 (Wi-Fi hotspot)
    Every time, I have to either disable the LAN to get the Wi-Fi hotspot working, or disable the Wi-Fi hotspot to get the LAN working. How can I share the connection so I can use both at the same time?
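
    On Windows 7 the hotspot half of this is usually built with the hosted network feature; a minimal sketch (SSID and key are made-up examples):
      rem create and start the software access point ("Wireless Network Connection 2" above)
      netsh wlan set hostednetwork mode=allow ssid=LaptopHotspot key=ChangeMe12345
      netsh wlan start hostednetwork
      rem stop it again when finished
      netsh wlan stop hostednetwork
    Note that Internet Connection Sharing can only point at a single private-side adapter, which is why enabling one connection keeps disabling the other; a commonly suggested workaround is to bridge the local area connection and the hosted-network adapter and share the internet connection with that bridge instead.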

    Read the article

  • Internet Connection Sharing, can't Share Wireless

    - by GuyNoir
    I'm using Windows XP, and I've been trying to set up my laptop so that I can connect to the internet connection that I get on the laptop through my mobile on an ad-hoc network. I've set up an ad-hoc network, but when I try to select "allow other users to connect through this computer's internet connection", the only options I have are the Local Area Connections. The tutorial I've been using says that the Wireless Connection should be in that pull-down menu. Any help?

    Read the article

  • Passing data from one database to another database table (Access) (C#)

    - by SAMIR BHOGAYTA
    using System;
    using System.Data;
    using System.Data.OleDb;

    string conString = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=Backup.mdb;Jet OLEDB:Database Password=12345";
    OleDbConnection dbconn = new OleDbConnection(conString);   // the connection string must be supplied before Open()
    OleDbCommand dbcommand = new OleDbCommand();
    try
    {
        if (dbconn.State == ConnectionState.Closed)
            dbconn.Open();
        // Append every row of [Master] in the external Data.mdb into [Master] in Backup.mdb
        string selQuery = "INSERT INTO [Master] SELECT * FROM [MS Access;DATABASE=" + "\\Data.mdb" + ";].[Master]";
        dbcommand.CommandText = selQuery;
        dbcommand.CommandType = CommandType.Text;
        dbcommand.Connection = dbconn;
        int result = dbcommand.ExecuteNonQuery();   // number of rows copied
    }
    catch (Exception ex)
    {
        // Don't swallow the exception silently - at minimum surface the message
        Console.WriteLine(ex.Message);
    }
    finally
    {
        dbconn.Close();
    }

    Read the article

  • MAA with the Database Machine - maximum availability

    - by Fekete Zoltán
    A few days ago a new Oracle white paper dissecting maximum availability was published :): Oracle Data Guard: Disaster Recovery for Sun Oracle Database Machine. This document analyzes the use of Oracle Data Guard in an Exadata environment. On the last pages we also find some extremely useful links. What can Data Guard be used for?
      - handling disaster situations
      - rolling database upgrades
      - one way to migrate to the Exadata environment
      - putting the standby database to work
    The Sun Oracle Database Machine is available in three configurations: Full Rack, Half Rack and Quarter Rack. It can be upgraded, and even many Full Racks connected together can operate as a single machine. The sky is the limit! :) After all, the sun is our most important star. The Database Machine already provides high availability on its own, since every component that matters for operation is at least duplicated! Naturally this applies to the data as well. The Database Machine is an ideal, fast environment for running both OLTP and DW workloads, as well as for database consolidation. For transactional (OLTP) systems it has long been an important requirement that there be a disaster site behind the primary site, able to take over database operations if a flood, fire or other sad disaster were to strike the primary site. Nowadays the MAA architecture, that is, the Maximum Availability Architecture, also plays an important role in operating data warehouses (DW). The PDF can be downloaded from here: Oracle Data Guard: Disaster Recovery for Sun Oracle Database Machine.

    Read the article

  • Minimal set of critical database operations

    - by Juan Carlos Coto
    In designing the data layer code for an application, I'm trying to determine if there is a minimal set of database operations (both single and combined) that are essential for proper application function (i.e. the database is left in an expected state after every data access call). Is there a way to determine the minimal set of database operations (functions, transactions, etc.) that are critical for an application to function correctly? How do I find it? Thanks very much!

    Read the article

  • Internet connection dropping

    - by Mehdi Golchin
    When my internet connection is idle for a while (about 5 minutes), the connection is lost. It only happens when the connection is idle, not during a download. I'm using an ADSL service through a HUAWEI smartAX MT882a ADSL modem. Any help will be appreciated.

    Read the article

  • How do I Integrate Production Database Hot Fixes into Shared Database Development model?

    - by TetonSig
    We are using SQL Source Control 3, SQL Compare, SQL Data Compare from RedGate, Mercurial repositories, TeamCity and a set of 4 environments including production. I am working on getting us to a dedicated environment per developer, but for at least the next 6 months we are stuck with a shared model. To summarize our current system: we have a DEV SQL Server where developers first make changes/additions. They commit their changes through SQL Source Control to a local hgdev repository. When they execute an hg push to the main repository, TeamCity listens for that and then (among other things) pushes the hgdev repository to hgrc. Another TeamCity process listens for that and does a pull from hgrc and deploys the latest to a QA SQL Server where regression and integration tests are run. When those are passed, a push from hgrc to hgprod occurs. We do a compare of hgprod to our PREPROD SQL Server and generate deployment/rollback scripts for our production release. Separate from the above we have database hot fixes that will need to be applied in between releases. The process there is for our Operations team to make changes on the PreProd database and then, after testing, to use SQL Source Control to commit their hot fix changes to hgprod from the PREPROD database, then do a compare from hgprod to PRODUCTION, create deployment scripts and run them on PRODUCTION. If we were in a dedicated-database-per-developer model, we could simply automatically push hgprod back to hgdev and merge in the hot fix change (through TeamCity monitoring for hgprod check-ins), and then developers would pick it up and merge it to their local repository and database periodically. However, given that with a shared model the DEV database itself is the source of all changes, this won't work. Pushing hotfixes back to hgdev will show up in SQL Source Control as being different from the DEV SQL Server, and therefore we would need to overwrite the repository with the "change" from the DEV SQL Server. My only workaround so far is to just have OPS assign a developer the hotfix ticket with a script attached and then we run their hotfixes against DEV ourselves to merge them back in. I'm not happy with that solution. Other than working faster to get to a dedicated environment, are there other ways to keep this loop going automatically?
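
    For reference, the automatic loop described for the dedicated-environment model could look roughly like this on a build agent working in a clone of hgdev (repository URLs are placeholders):
      # pull the hotfix that Operations committed to hgprod
      hg pull https://hg.example.com/hgprod
      # merge it into the development line and publish it for developers to pull
      hg merge
      hg commit -m "Merge production hotfix back into the dev line"
      hg push https://hg.example.com/hgdev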

    Read the article

  • Oracle Database 11g Underground Advice for Database Administrators, by April C. Sims

    - by alejandro.vargas
    Recently I received a request to review the book "Oracle Database 11g Underground Advice for Database Administrators" by April C. Sims. I was happy to have the opportunity to learn some details about the author; she is an active contributor to the Oracle DBA community through her blog "Oracle High Availability". The book is a serious and interesting work. I think it provides a good study and reference guide for DBAs who want to understand and implement highly available environments. She starts by walking through the more general aspects and skills required of a DBA, and then goes on to explain the steps required to implement Data Guard, use RMAN, upgrade to 11g, etc.

    Read the article

  • dependency analysis from C# code thru to database tables/columns

    - by fpdave
    I'm looking for a tool to do system-wide dependency analysis in C# code and SQL Server databases. It's looking like the only tool available that does this might be CAST (CAST Software), which is expensive, and it does lots more besides that I don't really need. C# code through to database column dependency would be hugely useful for many reasons, including:
      - determining the effects of database changes throughout the system
      - seeing hot spots in the database schema
      - finding dead stored procedures/tables/etc.
      - understanding the existing code base
    Does anyone know of any such tools?
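
    The C#-to-schema mapping is the hard part, but the database half of the analysis can at least be started with SQL Server's own dependency catalog view - a sketch, with a hypothetical table name:
      -- which modules (procedures, views, functions) reference a given table?
      SELECT OBJECT_SCHEMA_NAME(d.referencing_id) AS referencing_schema,
             OBJECT_NAME(d.referencing_id)        AS referencing_object,
             d.referenced_schema_name,
             d.referenced_entity_name
      FROM   sys.sql_expression_dependencies AS d
      WHERE  d.referenced_entity_name = 'Customers';   -- hypothetical table name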

    Read the article

  • Database Documentation - Lands of Trolls: Why and How?

    When database documentation is mentioned in an IT department, everybody nods wisely, yet everyone does their best to avoid doing it. Attention to database documentation can be the best investment in time a development group can make. It is essential, and no system can be properly maintained without it. Feodor gives a sensible explanation and guideline for the unloved task of creating database documentation.

    Read the article

  • Oracle Database 11g Helps Control Exponential Data Growth

    - by [email protected]
    The 2010 ESG annual customer survey is now available. As part of it, ESG interviewed 300 customers about their IT priorities and, unsurprisingly, "Manage Data Growth" is top of the list. Perhaps less self-evident is the proposed solution to target this prime concern: "Often overlooked because it is a database platform, Oracle Database 11g offers additional capabilities such as automatic storage management (ASM), advanced data compression, and data protection that make managing data growth much easier for organizations of any size." The paper goes on to discuss these capabilities and highlights their potential benefits. Oracle Database 11g Helps Control Exponential Database Growth - a worthwhile read for anyone having to deal with rapidly increasing amounts of data. Download your free copy here.
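
    As a concrete example of the advanced data compression capability mentioned above, Oracle Database 11g Release 2 can compress tables for conventional DML - a sketch with hypothetical table names:
      -- compress an existing table in place
      ALTER TABLE orders_history MOVE COMPRESS FOR OLTP;
      -- or create a compressed copy
      CREATE TABLE orders_archive COMPRESS FOR OLTP AS SELECT * FROM orders;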

    Read the article

  • Announcing Oracle Audit Vault and Database Firewall

    - by Troy Kitch
    Today, Oracle announced the new Oracle Audit Vault and Database Firewall product, which unifies database activity monitoring and audit data analysis in one solution. This new product expands protection beyond Oracle and third party databases with support for auditing the operating system, directories and custom sources. Here are some of the key features of Oracle Audit Vault and Database Firewall:
      - Single Administrator Console
      - Default Reports
      - Out-of-the-Box Compliance Reporting
      - Report with Data from Multiple Source Types
      - Audit Stored Procedure Calls - Not Visible on the Network
      - Extensive Audit Details
      - Blocking SQL Injection Attacks
      - Powerful Alerting Filter Conditions
    To learn more about the new features in Oracle Audit Vault and Database Firewall, watch the on-demand webcast.

    Read the article

  • Stairway to Database Source Control Level 2: Getting a Database into Source Control

    In this level, we're going to continue the philosophy of learning by example, and get a database into our SVN repository. We will also consider our overall approach to source control for databases, and the manner in which our team will develop these databases concurrently. 24% of devs don’t use database source control – make sure you aren’t one of them. Version control is standard for application code, but databases haven’t caught up. So what steps can you take to put your SQL databases under version control? Why should you start doing it? Read more to find out…
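
    The mechanics of that first check-in are ordinary SVN usage - a sketch, assuming the schema has already been scripted out to one .sql file per object and with a placeholder repository URL:
      # take a working copy of the database folder in the repository
      svn checkout http://svnserver/repos/mydb/trunk db-scripts
      # copy the generated .sql scripts into db-scripts/, then:
      svn add db-scripts/*.sql
      svn commit db-scripts -m "Initial check-in of database schema scripts"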

    Read the article
