Search Results

Search found 10028 results on 402 pages for 'berkeley db'.


  • NIS user not being added to NIS group

    - by Brian
    I have set up a NIS server and several NIS clients. I have a user and a group on the NIS server like so: /etc/passwd: myself:x:5000:5000:,,,:/home/myself:/bin/bash /etc/group: fishy:x:3001:otheruser,etc,myself,moreppl I imported the users and groups on the NIS client by adding +:::::: to /etc/passwd and +::: to /etc/group. I can log in to the NIS client, but when I run groups, fishy is not listed. But getent group fishy shows that it was imported correctly and lists me as a member. And if I do sudo su - myself, then suddenly groups says I am in the group! I also had nscd installed, and the groups worked correctly for a while. It seemed like after being logged in for a while, I would silently be dropped out of the group. If I restarted nscd and logged in again, then the groups worked correctly...for a while. There are no UID or GID conflicts with local users or groups. Update: Contents of /etc/nsswitch.conf: passwd: compat group: compat shadow: compat hosts: files nis dns networks: files protocols: db files services: db files ethers: db files rpc: db files netgroup: nis aliases: nis files
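    A quick way to separate an nscd caching problem from a login-session problem (a sketch; "fishy" and "myself" are just the names from the question):

        getent group fishy   # membership as served by the NIS maps
        id                   # groups attached to the current login session
        id myself            # membership looked up fresh from the group database
        sudo nscd -i group   # invalidate nscd's group cache, then log in again

    Note that groups and id (with no argument) report the supplementary groups assigned when the session was created, so a change, or a stale nscd cache, only shows up after a fresh login, which would also explain why sudo su - myself suddenly sees the group.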

    Read the article

  • PostgreSQL failover cluster on Windows Server

    - by user36997
    We are looking for advice on how to set up a basic failover cluster for our application: We will be using 4 machines running Microsoft Windows Server (most probably 2003). All four will always run our application, which is essentially a web service. Load balancing is "outsourced" - somebody else handles the distribution of the web requests among the servers. Only one of the servers will be running the PostgreSQL server actively at any given time. Another server (of the four) also has the DB installed, but is on standby/passive. The DB data is stored on shared storage. No copying data between servers. Reads are done very frequently by many end-users, and in rather small chunks of data. Writes are done much less frequently, by fewer users, and in very large bulks of data. Now, how can one configure Microsoft Cluster Service to keep only one instance of the DB server and 4 instances (1 per server) of our application at all times? And does PostgreSQL integrate neatly with MSCS at all? Update: Instead of keeping the data on shared storage, I am also considering using log shipping to replicate data on a couple of DB servers. There are two issues with this option: Log shipping only makes sure that I have a second server that gets all of the data and is ready to take over. How do I implement the actual failure detection and failover switch? Switching back: Suppose the master fails and the system automatically fails over to the slave, and later the master comes back online. I understand that with WAL shipping this will require reconfiguring the log shipping once again, and that switching back is far from seamless. Is that so?
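    For the log-shipping variant, the moving parts look roughly like the sketch below (the share name and paths are made up, and this only covers getting WAL to the standby; failure detection and the actual switch would still have to come from MSCS or an external watchdog, and on pre-9.0 releases failing back generally means rebuilding the old master from a fresh base backup):

        # primary, postgresql.conf (8.3/8.4-era settings)
        archive_mode = on
        archive_command = 'copy "%p" "\\\\fileserver\\wal_archive\\%f"'

        # standby, recovery.conf
        restore_command = 'pg_standby "\\\\fileserver\\wal_archive" %f %p'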

    Read the article

  • hosts file seems to be ignored

    - by z4y4ts
    I have an almost fresh Ubuntu desktop box. OS was installed two weeks ago and updated from karmic repositories. Last week I had no problems with DNS. But this week something has changed. I'm not sure what and when, and not sure whether I changed any configs. So now I have a really weird situation. According to the configs, name resolving should work normally. /etc/hosts 127.0.0.1 localhost test 127.0.1.1 desktop /etc/host.conf order hosts,bind multi on /etc/resolv.conf # Generated by NetworkManager search search servers obtained via DHCP nameserver 192.168.0.3 /etc/nsswitch.conf passwd: compat group: compat shadow: compat hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4 networks: files protocols: db files services: db files ethers: db files rpc: db files netgroup: nis But in fact it is not. user@test ~ping test PING localhost (127.0.0.1) 56(84) bytes of data. [skip] Pinging is ok. user@test ~host test test.mydomain.com has address xx.xxx.161.201 I suspect that NetworkManager might cause this misbehavior, but I don't know where to start to check it. Any thoughts, suggestions?
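    One detail worth keeping in mind, which makes the output above expected rather than weird: ping and getent go through the NSS stack and therefore honour /etc/hosts, while host and dig talk to the DNS server directly and ignore it. A few commands to see the difference (nothing here is specific to this box):

        getent hosts test    # resolved via nsswitch.conf, so "files" wins -> 127.0.0.1
        ping -c1 test        # same path as getent, also returns 127.0.0.1
        host test            # queries the DHCP-supplied DNS server directly
        dig test +search     # likewise bypasses /etc/hosts entirely

    So NetworkManager rewriting /etc/resolv.conf would change what host and dig return, but it cannot stop /etc/hosts from being used by ordinary applications.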

    Read the article

  • Why would Ubuntu treat the hosts file so strangely?

    - by z4y4ts
    I have an almost fresh Ubuntu desktop box. OS was installed two weeks ago and updated from karmic repositories. Last week I had no problems with DNS. But this week something has changed. I'm not sure what and when, and not sure whether I changed any configs. So now I have a really weird situation. According to the configs, name resolving should work normally. /etc/hosts 127.0.0.1 localhost test 127.0.1.1 desktop /etc/host.conf order hosts,bind multi on /etc/resolv.conf # Generated by NetworkManager search search servers obtained via DHCP nameserver 192.168.0.3 /etc/nsswitch.conf passwd: compat group: compat shadow: compat hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4 networks: files protocols: db files services: db files ethers: db files rpc: db files netgroup: nis But in fact it is not. user@test ~ping test PING localhost (127.0.0.1) 56(84) bytes of data. [skip] Pinging is ok. user@test ~host test test.myviacube.com has address xx.xxx.161.201 I suspect that NetworkManager might cause this misbehavior, but I don't know where to start to check it. Any thoughts, suggestions?

    Read the article

  • Compressing and copying large files on Windows Server?

    - by Aaron
    I've been having a hard time copying large database backups from the database server to a test box at another site. I'm open to any ideas that would help me get this database moved without having to resort to a USB hard drive and the mail. The database server is running Windows Server 2003 R2 Enterprise, 16 GB of RAM and two quad-core 3.0 GHz Xeon X5450s. Files are SQL Server 2005 backup files between 100 GB and 250 GB. The pipe is not the fastest and SQL Server backup files typically compress down to 10-40% of the original, so it made sense to me to compress the files first. I've tried a number of methods, including: gzip 1.2.4 (UnxUtils) and 1.3.12 (GnuWin) bzip2 1.0.1 (UnxUtils) and 1.0.5 (Cygwin) WinRAR 3.90 7-Zip 4.65 (7za.exe) I've attempted to use WinRAR and 7-Zip options for splitting into multiple segments. 7za.exe has worked well for me for database backups on another server, which has ~50 GB backups. I've also tried splitting the .BAK file first with various utilities and compressing the resulting segments. No joy with that approach either- no matter the tool I've tried, it ends up butting against the size of the file. Especially frustrating is that I've transferred files of similar size on Unix boxes without problems using rsync+ssh. Installing an SSH server is not an option for the situation I'm in, unfortunately. For example, this is how 7-Zip dies: H:\dbatmp>7za.exe a -t7z -v250m -mx3 h:\dbatmp\zip\db-20100419_1228.7z h:\dbatmp\db-20100419_1228.bak 7-Zip (A) 4.65 Copyright (c) 1999-2009 Igor Pavlov 2009-02-03 Scanning Creating archive h:\dbatmp\zip\db-20100419_1228.7z Compressing db-20100419_1228.bak System error: Unspecified error
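    One hedged workaround when the archivers keep choking on the single huge input is to stream-compress straight into fixed-size pieces, so no tool ever has to create or seek one multi-hundred-GB archive file (the sketch assumes the GnuWin/Cygwin builds of gzip, split and md5sum on the box have large-file support):

        gzip -c db-20100419_1228.bak | split -b 500m - db.bak.gz.part.
        md5sum db.bak.gz.part.* > parts.md5
        # after copying, on the destination (Cygwin):
        md5sum -c parts.md5 && cat db.bak.gz.part.* | gzip -d > db-20100419_1228.bak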

    Read the article

  • Changing Corosync/Heartbeat pair's active node based on MySQL/Galera cluster state

    - by Hace
    Background I'm planning on building a High Availability "cluster" for our Zabbix instance by placing two physical servers in one server room and two in another server room. In each server room one of the physical servers will run Zabbix on RHEL and the other will run Zabbix's MySQL database, also on RHEL. I'd prefer synchronous replication for the MySQL nodes so I'm planning on using Galera in a master-slave configuration. The Zabbix instances on the two Zabbix servers would be controlled by Heartbeat/Corosync (although Red Hat Cluster Suite is also an option...) If the Zabbix server in Server Room A goes down, the one in Server Room B becomes active (and vice versa). Ditto for the MySQL servers/instances. If either of those cases happens, however, the connection between the Zabbix server and the MySQL server becomes significantly slower as it has to travel over the WAN. Question Is it possible to configure the Heartbeat/Corosync pair to instruct the MySQL/Galera cluster to switch the master node (if available) to the one that's in the same server room as the active Heartbeat/Corosync node, and (more challengingly) is it possible to do the same in the other direction, i.e. have the Galera cluster change the active Heartbeat/Corosync server to be in the same room as the active MySQL master server in case of a failover, in order to avoid unnecessary WAN transfers between the application and its DB? Theories Most likely I can get Corosync to run something that'd log in to one of the DB nodes to change the MySQL/Galera master, but I don't know if it's really possible to do anything similar in the other direction in Galera. Is it possible to define a "service" in Corosync/Heartbeat so that both the service and its MySQL service would migrate as one if possible? Using the DB server that's behind the WAN should still be a better option than DB downtime. Am I just using too many tools to solve a problem that'd be far simpler with something else?
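    If both halves were brought under the same Pacemaker/Corosync cluster - that is, the MySQL "master" role managed as a cluster resource rather than left to Galera alone - the "keep them in the same room" requirement becomes an ordinary colocation constraint. A rough crm-shell sketch with hypothetical resource names (zabbix-service, ms-mysql), not a drop-in config, and note that Galera itself is multi-master, so "master" here really just means "the node Zabbix should write to":

        crm configure colocation zabbix-near-db inf: zabbix-service ms-mysql:Master
        crm configure order db-before-zabbix inf: ms-mysql:promote zabbix-service:start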

    Read the article

  • What does this diagnostic output mean?

    - by ChrisF
    I recently had a fault with my broadband connection. It turned out to be a fault with the ISP's or teleco's equipment. My ISP posted this diagnostic, but while I understand it in general, I'd like to to know more about the details. I'm assuming that ATM means Asynchronous Transfer Mode and PPP means Point to Point Protocol. It was this that my router was indicating as the fault. xDSL Status Test Summary Sync Status: Circuit In Sync General Information NTE Status: NTE Power Status: Unknown Bypass Status: Upstream DSL Link Information Downstream DSL Link Information Loop Loss: 9.0 17.0 SNR Margin: 25 15 Errored Seconds: 0 0 HEC Errors: 0 Cell Count: 0 0 Speed: 448 8128 TAM Status: Successfully executed operation Network Test: Sub-Test Results Layer Name Value Status Modem pass Transmitter Power (Upstream) 12.4 dBm Transmitter Power (Downstream) 8.8 dBm Upstream psd -38 dBm/Hz Downstream psd -51 dBm/Hz DSL pass Equipment Vendor Name TSTC Equipment Vendor Id n/a Equipment Vendor Revision n/a Training Time 8 s Num Syncs 1 Upstream bit rate 448 kbps Downstream bit rate 8128 kbps Upstream maximum bit rate 1108 kbps Downstream maximum bit rate 11744 kbps Upstream Attenuation 3.5 dB Downstream Attenuation 0.0 dB Upstream Noise Margin 20.0 dB Downstream Noise Margin 19.0 dB Local CRC Errors 0 Remote CRC Errors 0 Up Data Path interleaved Down Data Path interleaved Standard Used G_DMT INP INP Upstream Symbols n/a INP Upstream Delay 4 ms INP Upstream Depth 4 INP Downstream Symbols n/a INP Downstream Delay 5 ms INP Downstream Depth 32 ATM Reason: No ATM cells received fail Number of cells transmitted 30 Number of cells received 0 number of Near end HEC errors 0 number of Far end HEC errors n/a PPP Reason: No response from peer fail PAP authentication nottested CHAP authentication nottested (I'm not sure that Super User is the best place to ask this, but two people have suggested I ask it here so here I am).

    Read the article

  • Which upgrade path for disk IO bound postgres server?

    - by user41679
    Hi all, We currently have a Sun x4270 with 2 x quad-core Xeon Nehalem 2.93 GHz CPUs (16 threads), 72 gig of ram and 16 x 10k SAS disks split between the os raid 1, a partition for the Write Ahead Logs which is raid 10 and a partition for the database tables and indexes which is also raid 10, all xfs. I'm currently evaluating which path to go down in terms of upgrades. We'll be sharding the DB at some point soon, but for now I need to focus on hardware upgrades specifically. The machine is not CPU or memory bound at all at the moment, just IOWait is becoming an issue. The machine is mostly write access as we have a heavy caching layer. We're seeing about 300 write IOPS average on both the database partitions. We don't have any additional storage infrastructure like a Fibre Channel or iSCSI network. Budget isn't too much of a concern, something in line with the size of this server (i.e. no $1m IBM machines). Space is ok on the DB side of things, we're running out obviously but there's also some reduction we can do. Additional space would be good though. My current thoughts are either: * iSCSI SAN, possibly with a 10Gbit network that has solid state acceleration. * FusionIO card / Sun F20 card (will the FusionIO card work in the Sun box?) * DAS shelf (something like this http://www.broadberry.co.uk/das-direct-attached-storage-servers/cyberstore-224s-das, with a combination of 15k SAS disks and some Intel X25-E drives for DB indexes etc.) What would I need to put in the x4270 to add a DAS shelf? I think it's a SAS HBA card, do I have to use Sun's own card or will any PCI Express card work? Anything else??? What would you guys do from your experience? I appreciate it's a lot of questions, but I haven't expanded a DB machine for a number of years and the landscape has changed dramatically since then! Any advice or feedback would be very much appreciated. Let me know if there's anything else I can clarify. Thanks in advance!
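    Before picking hardware it is worth confirming where the latency actually sits (WAL volume vs. data volume) and how close each array is to saturation during a busy window; a quick sketch, assuming sysstat is installed:

        iostat -xm 5        # watch await and %util on the WAL and data volumes
        vmstat 5            # confirm the stall really is IO wait ("wa" column)
        # inside postgres, checkpoint pressure: SELECT * FROM pg_stat_bgwriter;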

    Read the article

  • Best Practices for adding Exchange Archive to current 3 server setup

    - by ADquestion
    I'm looking to add an Archive Database (which I know is just a Mailbox Database) to our current Exchange 2010 environment. I have done this in the past at a previous job, but we had a simpler setup than at this current job. I've been trying to find some best practices to make sure it's setup in an ideal way, but so far not finding the details I would prefer. Hoping someone on here can give me a few pointers. Currently we have a 3 server setup, Server1, Server2 and Server3. Three databases of course, DB1, DB2 and DB3. We have a DAG setup between them. Server1 has DB1 and DB3 on it, DB1 is not active, DB3 is active. Server2 has DB1 and DB2 on it, both are active. Server3 has DB2 and DB3 on it, both are not active. All three servers are virtual (VMware). Each one is setup identical to the other as follows: C:\ 60GB - OS E:\ 600GB - DB (currently only 90GB used, pointing to Datastore just for Server2) F:\ 200GB - Log (2GB used, pointing to same Datastore as above) G:\ 200GB - Restore (0 used, pointing to same Datastore as above) The drives are all set to Thin Provisioning, and it looks as though I have 600GB of available space. They have not been on Exchange that long and only have about 70GB worth of PSTs to import back in that will be going to the Archive Database, plus anything older than 2 years from their current inbox that will be moved into there. I was considering placing the Archive DB on the E:\ drive of Server3 (only) like the current DB, but wasn't sure if that was acceptable. I don't plan on setting the Archive DB up with the DAG, just plan on having it as a single repository for older emails and manually back it up every now and then. If anyone has any suggestions on this I would appreciate it the input. I've done it on a slightly smaller scale before and it worked well, but like to think it through before pulling the trigger, especially at a new job. :) Thanks again!

    Read the article

  • ADSL2+ - High sync-rate, good line attenuation, but low noise margin and slow speeds

    - by Mark Pim
    I've been with my ISP (IdNet) for a few months and have been getting some good speeds, but in the last week the speed has dramatically decreased (from 15 Mbps+ to around 0.2 Mbps). This happens at all times of day, not just peak periods. Obviously I've done all I can to isolate problems my end - only one PC is connected to the router (via ethernet cable) and no other background programs are using the network etc. I've raised the issue with the ISP and they've suggested trying a new ADSL filter to see if that is causing the problem, but I thought it would also be good to get the opinion of Super User on possible causes or other troubleshooting I can do. Here are the juicy stats :) My router (Netgear DGN1000) reports: Downstream Upstream Connection Speed 17602 kbps 1062 kbps Line Attenuation 17.9 dB 8.6 dB Noise Margin 6.0 dB 6.1 dB I used RouterStats and it seems to show those figures stay fairly consistent all the time. I ran the BT speedtest and it reported: download speed of 164 kbps, out of a max achievable of 21000 kbps upload speed of 859 kbps, out of 1048 kbps DSL connection rate 17719 kbps down and 1048 kbps up IP Profile of 15000 kbps Is there any more troubleshooting I can do? Does this look like a problem with my equipment / wiring or with BT's line? Any advice would be great :)

    Read the article

  • Exchange 2010 SP2 database size

    - by Chad
    I have a single Exchange 2010 SP2 environment with 3 DB stores. I am trying to reduce the sizes by moving the mailboxes to a spare DB and then deleting the empty database. I cleaned up the users' mailboxes to reduce the sizes and set the retention periods to 1 day each and waited several days before moving mailboxes. The databases are backing up fine and clearing log files, but when I move the mailboxes I noticed they were taking a long time, even though some were less than 100MB. When I checked the new database size it seems like the original mailbox size might be moving (1GB instead of 100MB). Exchange is showing the expected smaller mailbox sizes when I run Get-MailboxStatistics against the DB. So if I have 5 mailboxes of 100MB each it is showing something like 3GB instead of around 500MB, and no whitespace. I keep waiting, thinking maybe the retention period has not expired yet, but it is much longer than 1 day already. I am setting them both to 0 today to see if that works. What am I missing to get the combined mailbox sizes to match the DB size minus whitespace?

    Read the article

  • mysqldump --where with = operator doesn't get all rows - Help!

    - by JonathanLIVE
    I have a situation with a particular table that now thinks it contains 4 Petabytes of data. I know that sounds cool, but I assure you, it is only on a 60GB partition. This table has 9 fields in it. One of them is a domain_id field. It is the best field to identify the rows by, as there are only approximately 6300 of them. The only other field option to match has over 2 million records, and that's just more difficult. I cannot do a straight mysqldump because it will attempt to output all 4PB of data and fill the drive long before it gets close to that, so I need to surgically remove the good stuff, destroy the db, and recreate it. I believe if I can do a dump for each domain_id record, then I will get most of the usable data out of it. This is what I am trying to use: mysqldump -u root --skip-opt -q --no-create-info --skip-add-drop-table --max_allowed_packet=1000000000 database table --where="domain_id=10" > domains10.sql Using this I expect every row with the domain_id 10 to be exported. However, when I check the export, I am only getting 1 row, whereas when I look at the db, there are many many rows. It is as though the operator just finds one, then gives up. I have tried various operators. Using the < or > operators I am able to get more of the data, but the export stops short at certain rows where the data has been compromised. With over 6000 to go through, I can't narrow down which rows are being affected in the export easily enough. So, what I need is an operator that will basically do what I thought = would do, simply give me an export of all records that match the specific field. Also note, the only way I got this DB even accessible is through innodb_force_recovery = 3. So I need to get this right, because after this is done, I have to drop the db in order to make mysql functional again. Looking forward to any helpful answers.
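    One hedged way around the export dying on the first damaged row is to loop over the ~6300 domain_id values and dump each one to its own file, so a corrupted row only costs that one domain and the bad IDs identify themselves ("database" and "table" are the placeholder names from the question, and the credentials are assumed):

        for id in $(mysql -N -u root -e "SELECT DISTINCT domain_id FROM database.table"); do
          mysqldump -u root --skip-opt -q --no-create-info --skip-add-drop-table \
            database table --where="domain_id=$id" > "domain_$id.sql" \
            || echo "$id" >> failed_ids.txt
        done

    This assumes the DISTINCT scan itself still completes under innodb_force_recovery; if it does not, iterating over a list of domain IDs taken from elsewhere works the same way.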

    Read the article

  • BIND9 Forwarding by view

    - by Triztian
    Hi I think this is a simple issue, I'd like to forward only to certain IPs in the LAN network, for example I have 2 acl lists: acl "office1" { 192.168.1.15; // With internet access }; acl "production" { 192.168.1.101; // No internet access }; I know that there probably should be more efficient ways to restrict internet access, but at the moment this is what I'd like to try.Here's what I've tried in named.conf.local // Inlcude my acl definitions include "/etc/bind/acls.conf"; view "no-internet" { match-clients { production; }; include "/etc/bind/named.conf.default-zones"; zone "localdomain.com" { type master; file "/etc/bind/db.localdomain.com"; }; zone "1.168.192.in-addr.arpa" { type master; file "/etc/bind/db.192.168.1"; }; } view "internet" { match-clients { office1; }; include "/etc/bind/named.conf.default-zones"; forwarders { 201.56.59.14; // Made Up 201.56.59.15; // Made Up }; zone "localdomain.com" { type master; file "/etc/bind/db.localdomain.com"; }; zone "1.168.192.in-addr.arpa" { type master; file "/etc/bind/db.192.168.1"; }; }; As you can see I want a localdomain.com defined for every computer in my network and forward internet access to the computers in the office but not to the ones on the production floor. I've modified my conf file, however the IP in the "no-internet" acl is able to resolve the domains, even though I've rebooted the computer, flushed the DNS using ipconfig /flushdns and set my DNS Server as the only one, why is this still happening? Thanks in advance.
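    For what it's worth, the likely reason the production host still resolves external names is that the "no-internet" view pulls in named.conf.default-zones (which includes the root hints) and recursion is on by default, so BIND recurses for those clients even without forwarders. A minimal, hedged tweak is to switch recursion off in that view and serve only the local zones:

        view "no-internet" {
            match-clients { production; };
            recursion no;    // answer only for the zones below, refuse everything else
            zone "localdomain.com" { type master; file "/etc/bind/db.localdomain.com"; };
            zone "1.168.192.in-addr.arpa" { type master; file "/etc/bind/db.192.168.1"; };
        };

    Answers a client cached before the change will of course survive until their TTL expires.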

    Read the article

  • what is best multi-server configuration with OpenVPN

    - by sebut
    We have a number of database servers running MongoDB on Debian plus a number of application servers, also on Debian. The db servers hold replicating db clusters, so they need to talk to each other. Application servers need to talk to all db servers (for reasons of fault tolerance). The servers are potentially spread across multiple hosting centers, so we need secure channels between all servers. The number of servers is bound to grow, so we need a VPN solution that's easy to maintain and expand. This is why I feel that the SSH tunnels we use for testing might not be up to the task, and OpenVPN seems the way to go. I have ruled out TAP, since I understand that this would mean all traffic going to all the servers - perhaps this is a misunderstanding and TAP acts more like a switch? With TUN devices I imagine that all DB servers would live in their own separate subnet, and they would also need a client configured to be able to connect to each of their peers. The application servers could live in a common subnet range with a client config only. Does this sound like a reasonable setup? Strangely, on the web I did not find anything about multi-server setups with OpenVPN. Thanks for all insights and ideas!
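    It does sound reasonable. On the TAP aside: TAP behaves more like a virtual layer-2 switch (broadcasts reach everyone, but unicast traffic is not copied to every server), while TUN is plain routed layer 3, which is usually the better fit here. A minimal routed sketch with one hub server and everything else as clients (addresses and names are illustrative only):

        # server.conf on the VPN hub
        dev tun
        server 10.8.0.0 255.255.255.0
        client-to-client        # lets DB replicas and app servers reach each other via the hub
        keepalive 10 60
        persist-key
        persist-tun

        # client.conf on each DB/app server
        client
        dev tun
        remote vpn.example.org 1194
        nobind

    A second hub (or an additional "remote" line per client) covers the case of the hub itself failing.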

    Read the article

  • ISACA Webcast follow up: Managing High Risk Access and Compliance with a Platform Approach to Privileged Account Management

    - by Darin Pendergraft
    Last week we presented how Oracle Privileged Account Manager (OPAM) could be used to manage high risk, privileged accounts.  If you missed the webcast, here is a link to the replay: ISACA replay archive (NOTE: you will need to use Internet Explorer to view the archive) For those of you that did join us on the call, you will know that I only had a little bit of time for Q&A, and was only able to answer a few of the questions that came in.  So I wanted to devote this blog to answering the outstanding questions.  Here they are. 1. Can OPAM track admin or DBA activity details during a password check-out session? Oracle Audit Vault is monitoring these activities which can be correlated to check-out events. 2. How would OPAM handle simultaneous requests? OPAM can be configured to allow for shared passwords.  By default sharing is turned off. 3. How long are the passwords valid?  Are the admins required to manually check them in? Password expiration can be configured and set in the password policy according to your corporate standards.  You can specify if you want forced check-in or not. 4. Can 2-factor authentication be used with OPAM? Yes - 2-factor integration with OPAM is provided by integration with Oracle Access Manager, and Oracle Adaptive Access Manager. 5. How do you control access to OPAM to ensure that OPAM admins don't override the functionality to access privileged accounts? OPAM provides separation of duties by using Admin Roles to manage access to targets and privileged accounts and to control which operations admins can perform. 6. How and where are the passwords stored in OPAM? OPAM uses Oracle Platform Security Services (OPSS) Credential Store Framework (CSF) to securely store passwords.  This is the same system used by Oracle Applications. 7. Does OPAM support hierarchical/level based privileges?  Is the log maintained for independent review/audit? Yes. OPAM uses the Fusion Middleware (FMW) Audit Framework to store all OPAM related events in a dedicated audit database.  8. Does OPAM support emergency access in the case where approvers are not available until later? Yes.  OPAM can be configured to release a password under a "break-glass" emergency scenario. 9. Does OPAM work with AIX? Yes supported UNIX version are listed in the "certified component section" of the UNIX connector guide at:http://docs.oracle.com/cd/E22999_01/doc.111/e17694/intro.htm#autoId0 10. Does OPAM integrate with Sun Identity Manager? Yes.  OPAM can be integrated with SIM using the REST  APIs.  OPAM has direct integration with Oracle Identity Manager 11gR2. 11. Is OPAM available today and what does it cost? Yes.  OPAM is available now.  Ask your Oracle Account Manager for pricing. 12. Can OPAM be used in SAP environments? Yes, supported SAP version are listed in the "certified component section" of the SAP  connector guide here: http://docs.oracle.com/cd/E22999_01/doc.111/e25327/intro.htm#autoId0 13. How would this product integrate, if at all, with access to a particular field in the DB that need additional security such as SSN's? OPAM can work with DB Vault and DB Firewall to provide the fine grained access control for databases. 14. Is VM supported? As a deployment platform Oracle VM is supported. For further details about supported Virtualization Technologies see Oracle Fusion Middleware Supported System configurations here: http://www.oracle.com/technetwork/middleware/ias/downloads/fusion-certification-100350.html 15. Where did this (OPAM) technology come from? OPAM was built by Oracle Engineering. 
16. Are all Linux flavors supported?  How about BSD? BSD is not supported. For supported UNIX version see the "certified component section" of the UNIX connector guide http://docs.oracle.com/cd/E22999_01/doc.111/e17694/intro.htm#autoId0 17. What happens if users don't check passwords in at the end of a work task? In OPAM a time frame can be defined how long a password can be checked out. The security admin can force a check-in at any given time. 18. is MySQL supported? Yes, supported DB version are listed in the "certified component section" of the DB connector guide here: http://docs.oracle.com/cd/E22999_01/doc.111/e28315/intro.htm#BABGJJHA 19. What happens when OPAM crashes and you need to use the password? OPAM can be configured for high availability, but if required, OPAM data can be backed up/recovered.  See the OPAM admin guide. 20. Is OPAM Standalone product or does it leverage other components from IDM? OPAM can be run stand-alone, but will also leverage other IDM components

    Read the article

  • Welcome to the ISV Migration Center (IMC) Team blog

    - by lukasz.romaszewski(at)oracle.com
    Welcome to the ISV Migration Center (IMC) Team blog. The IMC is a team of senior Oracle technical consultants whose aim is to enable partners to rapidly and successfully adopt and implement Oracle's latest technology. The IMC consultants are trained and equipped to deliver leading-edge, enterprise-quality technology solutions. This blog has been created to serve as an information exchange platform on Oracle Fusion Middleware and Database products, so you will find how-tos, articles, demos and other technical resources. We will also publish our upcoming workshops, webcasts and seminars, so make sure you check it regularly to get the latest updates. Here's our team: Lukasz Romaszewski - Java and middleware specialist, 8 years' experience in architecting, developing and supporting enterprise solutions based on J2EE and Oracle Database technology. At Oracle since April 2008, working as an IMC Migration Consultant in the Oracle Partner Hub in Cracow, Poland. Helping Oracle Partners in migrating their solutions to the latest Oracle Fusion Middleware stack, running hands-on migration workshops and seminars across Europe. Experienced in the following areas and products: Oracle WebLogic Application Server 11g, Application Development Framework (ADF), Oracle SOA Suite 11g, Oracle Forms 6i, 10g and 11g, Oracle Database (PL/SQL, AQ, XML DB), Java EE 5.0 based architecture. Murat Teksoz - Oracle DB and DB options, Oracle Linux, APEX and Oracle Business Intelligence specialist, 13 years' experience in database management, performance tuning, diagnostics, installation and configuration, database security, high availability and disaster recovery solutions. Working at Oracle IMC Istanbul since September 2008, delivering partner workshops and seminars in Europe and Central Asia. Experienced in the following areas and products: Oracle 9i, 10g, 11g Database solutions, Oracle Partitioning, Total Recall, Advanced Compression, Oracle high availability solutions (Real Application Clusters), Oracle disaster recovery solutions (Oracle Data Guard), Oracle Grid Control, Oracle Linux, Oracle Business Intelligence solutions (Oracle BI 10g-11g), migration tools (SQL Developer) for migrating from SQL Server, MySQL, Sybase and DB2 to Oracle Database, Oracle APEX (Application Express). Vadim Melnikov - Oracle Database specialist with DB options, Linux and virtualization skills. Vadim has more than 8 years' experience with Oracle products and is now working as a Database consultant in Oracle IMC Moscow as an employee of FORS Development Center, a Russian Oracle Platinum partner. Helping Oracle Partners to migrate solutions to Oracle from other platforms and adopt new Oracle technologies, running workshops and seminars.
Experienced in the following areas and products: Oracle Database 9i, 10g, 11g solutions (SQL, PL/SQL, installing, configuring, performance tuning, diagnostics, database management), Oracle DB options (Partitioning, Total Recall, Advanced Compression), Oracle Enterprise Manager, Oracle Enterprise Linux, Oracle VM 2 for x86, migration to Oracle Database, Oracle Application Express. Gokhan Gungor - Java (J2EE) Lead Developer and Architect. Designed and developed web applications, middleware systems/services, desktop applications and back-end tools/services using Java, WebLogic Server, JBoss and open source frameworks. Joined Oracle in 2010 as a Fusion Middleware consultant in the Istanbul IMC, responsible for running migration and adoption workshops and seminars covering Java technology, ADF, WebLogic and SOA, and providing technical consultancy for migration projects. Experienced in the following areas and products: Oracle WebLogic Server, Application Development Framework (ADF), JDeveloper, Java EE (EJB, JMS, Servlet, JSP, JSF, JavaMail, JTA, JAAS, JSTL, JAXB), Java SE (JavaBeans, JDBC, XML, XSL, RMI, JNDI, JAXP), Oracle Database 10g, 11g. Dmitry Nefedkin - Oracle Middleware and Java specialist, 7+ years' experience in developing and designing enterprise solutions based on Oracle Database and Middleware, developing Oracle e-Business Suite customizations, and designing integration architecture within companies. Joined the Oracle team in October 2010 as an IMC FMW Consultant in Oracle Alliances & Channels in Moscow, Russia. Experienced in the following areas and products: Oracle WebLogic Application Server 11g, Oracle Service Bus 11g, Oracle SOA Suite 10g (BPEL PM, ESB, OWSM), Oracle Application Server 10g, Oracle Forms 6i and 9i, Oracle BI Publisher, Oracle ADF 10g, Oracle Database (SQL tuning, PL/SQL, AQ, Streams), Java EE 5 development. Check out our web site as well: http://www.oracle.com/partners/en/most-popular-resources/027930

    Read the article

  • obiee 10g teradata Solaris deployment

    - by user554629
    I have 3-4 years worth of notes on proper Teradata deployment across multiple operating systems.   The topic that is too large to cover succinctly in a blog entry.   I'm trying something new:  document a specific situation, consolidate the facts, document diagnostic procedures and then clone the structure to cover other obiee deployments (11g and other operating systems). Until the icon below is removed, this blog entry may be revised frequently.  No construction between June 6th through June 25th. Getting started obiee 10g certification:  pg 24-25 Teradata V2R5.1.x - V2R6.2, Client 13.10, certified 10.1.3.4.1obiee 10g documentation: Deployment Guide, Server Administration, Install/Config Guideobiee overview: teradata connectivity downloads: ( requires registration )solaris odbc drivers: sparc 13.10:  Choose 13.10.00.04  ( ReadMe ) sparc 14.00: probably would work, but not certified by Oracle on 10g I assume you have obiee 10.1.3.4.1 installed; 10.1.3.4.2 would be a better choice. Teradata odbc install requires root for Solaris pkgadd Only 1 version of Teradata odbc can be installed.symbolic links to the current version are created in /usr/lib at install obiee implementation background database access has two types of implementation:  native and odbcnative drivers use DB vendor client interfaces for accessodbc drivers are provided by the DB vendor for DB accessTeradata is an odbc interface Database. odbc drivers require an ODBC Driver Managerobiee uses Merant Data Direct driver manager obiee servers communicate with one another using odbc.The internal odbc driver is implemented by the obiee team and requires Merant Driver Manager. Teradata supplies a Driver Manager, which is built by Merant, but should not be used in obiee. The nqsserver shared library deployment looks like this  OBIEE Server<->DataDirect Manager<->Teradata Driver<->Teradata Database nqsserver startup $ cd $BI/setup$ . ./sa-init64.sh$ run-sa.sh autorestart64 The following files are referenced from setup:  .variant.sh  user.sh  NQSConfig.INI  DBFeatures.INI  $ODBCINI ( odbc.ini )  sqlnet.ora How does nqsserver connect to Teradata? A teradata DSN is created in the RPD. ( TD71 )setup/odbc.ini contains: [ODBC Data Sources] TD71=tdata.so[TD71]Driver=/opt/tdodbc/odbc/drivers/tdata.soDescription=Teradata V7.1.0DBCName=###.##.##.### LastUser=Username=northwindPassword=northwindDatabase=DefaultDatabase=northwind setup/user.sh contains LIBPATH\=/opt/tdicu/lib_64\:/usr/odbc/lib\:/usr/odbc/drivers\:/usr/lpp/tdodbc/odbc/drivers\:$LIBPATHexport LIBPATH   setup/.variant.sh contains if [ "$ANA_SERVER_64" = "1" ]; then  ANA_BIN_DIR=${SAROOTDIR}/server/Bin64  ANA_WEB_DIR=${SAROOTDIR}/web/bin64  ANA_ODBC_DIR=${SAROOTDIR}/odbc/lib64         setup/sa-run.sh  contains . ${ANA_INSTALL_DIR}/setup/.variant.sh. ${ANA_INSTALL_DIR}/setup/user.sh logfile="${SAROOTDIR}/server/Log/nqsserver.out.log"${ANA_BIN_DIR}/nqsserver -quiet >> ${logfile} 2>&1 &   nqsserver is running: nqsserver produces $BI/server/nqsserver.logAt startup, the native database drivers connect and record DB versions.tdata.so is not loaded until a Teradata DB connection is attempted.    Teradata odbc client installation Accept all the defaults for pkgadd.   Install in /opt. $ mkdir odbc$ cd odbc$ gzip -dc ../tdodbc__solaris_sparc.13.10.00.04.tar.gz | tar -xf - $ sudo su# pkgadd -d . TeraGSS# pkgadd -d . tdicu1310# pkgadd -d . tdodbc1310   Directory Notes: /opt/teradata/client/13.10/odbc_64/lib/tdata.soThe 64-bit obiee library loaded by nqsserver. 
/opt/teradata/client/13.10/odbc_64/lib is not needed in LD_LIBRARY_PATH /opt/teradata/client/13.10/tdicu/lib64is needed in LD_LIBRARY_PATH /usr/odbc should not be referenced;  it is a link to 32-bit libraries LD_LIBRARY_PATH_64 should not be used.     Useful bash functions and aliases export SAROOTDIR=/export/home/dw_adm/OracleBIexport TERA_HOME=/opt/teradata/client/13.10 export ORACLE_HOME=/export/home/oracle/product/10.2.0/clientexport ODBCINI=$SAROOTDIR/setup/odbc.iniexport TD_ICU_DATA=$TERA_HOME/tdicu/lib64alias cds="alias | grep '^alias cd' | sed 's/^alias //' | sort"alias cdtd="cd $TERA_HOME; ls" alias cdtdodbc="cd $TERA_HOME/odbc_64; ls -l"alias cdtdicu="cd $TERA_HOME/tdicu/lib64; ls -l"alias cdbi="cd $SAROOTDIR; ls"alias cdbiodbc="cd $SAROOTDIR/odbc; ls -l"alias cdsetup="cd $SAROOTDIR/setup; ls -ltr"alias cdsvr="cd $SAROOTDIR/server; ls"alias cdrep="cd $SAROOTDIR/server/Repository; ls -ltr"alias cdsvrcfg="cd $SAROOTDIR/server/Config; ls -ltr"alias cdsvrlog="cd $SAROOTDIR/server/Log; ls -ltr"alias cdweb="cd $SAROOTDIR/web; ls"alias cdwebconfig="cd $SAROOTDIR/web/config; ls -ltr"alias cdoci="cd $ORACLE_HOME; ls"pkgfiles() { pkgchk -l $1 | awk  '/^Pathname/ {print $2}'; }pkgfind()  { pkginfo | egrep -i $1 ; } Examples: $ pkgfind td$ pkgfiles tdodbc1310 | grep 64$ cds$ cdtdodbc$ cdsetup$ cdsvrlog$ cdweblog

    Read the article

  • Add SQL Azure database to Azure Web Role and persist data with entity framework code first.

    - by MagnusKarlsson
    In my last post I went for a warts n all approach to set up a web role on Azure. In this post I'll describe how to add an SQL Azure database to the project. This will be described with as little code and as few screen dumps as possible. All questions are welcome in the comments area. Please don't email, since questions answered in the comments field are made available to other visitors. As an example we will add a comments section to the site we used in the previous post (link here). Steps: 1. Create a Comments entity and then use scaffolding to set up controller and view, and add a ConnectionString to web.config. 2. Create an SQL Azure database in the Management Portal and link the new database. 3. Test it online! 1. Right-click the Models folder, choose Add, choose "Class…". Name the class Comment. 1.1 Replace the code in the class with the following: using System.Data.Entity; namespace MvcWebRole1.Models { public class Comment { public int CommentId { get; set; } public string Name { get; set; } public string Content { get; set; } } public class CommentsDb : DbContext { public DbSet<Comment> CommentEntries { get; set; } } } Now Entity Framework can create a database and a table named Comment. Build your project to assert there are no build errors. 1.2 Right-click the Controllers folder, choose Add, choose "Class…". Name the class CommentController and fill out the values as in the example below. 1.3 Click Add. Visual Studio now creates default Views for CRUD operations and a Controller adhering to these, and opens them. 1.4 Open Web.config and add the following connection string in the <connectionStrings> node. <add name="CommentsDb" connectionString="data source=(LocalDB)\v11.0;Integrated Security=SSPI;AttachDbFileName=|DataDirectory|\CommentsDb.mdf;Initial Catalog=CommentsDb;MultipleActiveResultSets=True" providerName="System.Data.SqlClient" /> 1.5 Save All and press F5 to start the application. 1.6 Go to http://127.0.0.1:81/Comments which will redirect you through CommentsController to the Index View, which looks like this: Click Create new. In the Create view, add name and content and press Create. // POST: /Comments/Create [HttpPost] public ActionResult Create(Comment comment) { if (ModelState.IsValid) { db.CommentEntries.Add(comment); db.SaveChanges(); return RedirectToAction("Index"); } return View(comment); } The default View() is Index, so that is the View you will come to. Looking like this: // GET: /Comments/ public ActionResult Index() { return View(db.CommentEntries.ToList()); } Resulting in the following screen dump (success!): 2. Now, go to the Management Portal and create a new db. 2.1 With the new database created, click the DB icon in the leftmost menu. Then click the newly created database. Click DASHBOARD in the top menu. Finally click Connection strings in the right menu to get the connection string we need to add to our web.debug.config file. 2.2 Now, take a copy of the connection string added earlier to web.config and paste it into web.debug.config in the connectionStrings node. Replace everything within " " in the copied connection string with what you got from SQL Azure. You will have something like this: 2.3 Rebuild the application, right-click the cloud project and choose "Package…" (if you haven't set up a publishing profile, which we will do in our next blog post).
Remember to choose the right config file, use debug for staging and release for production so your databases won’t collide. You should see something like this:   2.4 Go to Management Portal and click the Web Services menu, choose your service and click update in the bottom menu.   2.5 Link the newly created database to your application. Click the LINKED RESOURCES in the top menu and then click “Link” in the bottom menu. You should get something like this. 3. Alright then. Under the Dashboard you can find the link to your application. Click it to open it in a browser and then go to ~/Comments to try it out just the way we did locally. Success and end of this story!

    Read the article

  • Installing Oracle 11gR2 on RHEL 6.2

    - by Chris
    Hello all I'm having some difficulty installing Oracle 11gR2 on RHEL 6.2 I have compiled a giant list of every single step I have taken so far I installed RHEL 6.2 on VMWARE it did it's easy install automatically I Selected 4gb of memory Selected max size of 80Gb Selected 2 processors Sorry for the bad styling copy paste isn't working correctly The version of oracle i downloaded is Linux x86-64 11.2.0.1 I am installing this on a local machine NOT a remote machine I followed the following documentation http://docs.oracle.com/cd/E11882_01/install.112/e24326/toc.htm I bolded the steps which I was least sure about from my research Easy installed with RHEL 6.2 for VMWARE Registered with red hat so I can get updates Reinstalled vmware-tools by pressing enter at every choice Sudo yum update at the end something about GPG key selected y then y Checked Memory Requirements grep MemTotal /proc/meminfo MemTotal: 3921368 kb uname -m x86_64 grep SwapTotal /proc/meminfo SwapTotal: 6160376 kb free total used free shared buffers cached Mem: 3921368 2032012 1889356 0 76216 1533268 -/+ buffers/cache: 422528 3498840 Swap: 6160376 0 6160376 df -h /dev/shm Filesystem Size Used Avail Use% Mounted on tmpfs 1.9G 276K 1.9G 1% /dev/shm df -h /tmp Filesystem Size Used Avail Use% Mounted on /dev/sda2 73G 2.7G 67G 4% / df -h Filesystem Size Used Avail Use% Mounted on /dev/sda2 73G 2.7G 67G 4% / tmpfs 1.9G 276K 1.9G 1% /dev/shm /dev/sda1 291M 58M 219M 21% /boot All looked fine to me except maybe for swap? Software Requirements cat /proc/version Linux version 2.6.32-220.el6.x86_64 ([email protected]) (gcc version 4.4.5 20110214 (Red Hat 4.4.5-6) (GCC) ) #1 SMP Wed Nov 9 08:03:13 EST 2011 uname -r 2.6.32-220.el6.x86_64 (same as above but whatever) According to the tutorial should be On Red Hat Enterprise Linux 6 2.6.32-71.el6.x86_64 or later These are the versions of software I have installed binutils-2.20.51.0.2-5.28.el6.x86_64 compat-libcap1-1.10-1.x86_64 compat-libstdc++-33-3.2.3-69.el6.x86_64 compat-libstdc++-33.i686 0:3.2.3-69.el6 gcc-4.4.6-3.el6.x86_64 gcc-c++.x86_64 0:4.4.6-3.el6 glibc-2.12-1.47.el6_2.12.x86_64 glibc-2.12-1.47.el6_2.12.i686 glibc-devel-2.12-1.47.el6_2.12.x86_64 glibc-devel.i686 0:2.12-1.47.el6_2.12 ksh.x86_64 0:20100621-12.el6_2.1 libgcc-4.4.6-3.el6.x86_64 libgcc-4.4.6-3.el6.i686 libstdc++-4.4.6-3.el6.x86_64 libstdc++.i686 0:4.4.6-3.el6 libstdc++-devel.i686 0:4.4.6-3.el6 libstdc++-devel-4.4.6-3.el6.x86_64 libaio-0.3.107-10.el6.x86_64 libaio-0.3.107-10.el6.i686 libaio-devel-0.3.107-10.el6.x86_64 libaio-devel-0.3.107-10.el6.i686 make-3.81-19.el6.x86_64 sysstat-9.0.4-18.el6.x86_64 unixODBC-2.2.14-11.el6.x86_64 unixODBC-devel-2.2.14-11.el6.x86_64 unixODBC-devel-2.2.14-11.el6.i686 unixODBC-2.2.14-11.el6.i686 8. 
Probably screwed up here or step 9 /usr/sbin/groupadd oinstall /usr/sbin/groupadd dba(not sure why this isn't in the tutorial) /usr/sbin/useradd -g oinstall -G dba oracle passwd oracle /sbin/sysctl -a | grep sem Xkernel.sem = 250 32000 32 128 /sbin/sysctl -a | grep shm kernel.shmmax = 68719476736 kernel.shmall = 4294967296 kernel.shmmni = 4096 vm.hugetlb_shm_group = 0 /sbin/sysctl -a | grep file-max Xfs.file-max = 384629 /sbin/sysctl -a | grep ip_local_port_range Xnet.ipv4.ip_local_port_range = 32768 61000 /sbin/sysctl -a | grep rmem_default Xnet.core.rmem_default = 124928 /sbin/sysctl -a | grep rmem_max Xnet.core.rmem_max = 131071 /sbin/sysctl -a | grep wmem_max Xnet.core.wmem_max = 131071 /sbin/sysctl -a | grep wmem_default Xnet.core.wmem_default = 124928 Here is my sysctl.conf file I only added the items that were bigger: Kernel sysctl configuration file for Red Hat Linux # For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and sysctl.conf(5) for more details. Controls IP packet forwarding net.ipv4.ip_forward = 0 Controls source route verification net.ipv4.conf.default.rp_filter = 1 Do not accept source routing net.ipv4.conf.default.accept_source_route = 0 Controls the System Request debugging functionality of the kernel kernel.sysrq = 0 Controls whether core dumps will append the PID to the core filename. Useful for debugging multi-threaded applications. kernel.core_uses_pid = 1 Controls the use of TCP syncookies net.ipv4.tcp_syncookies = 1 Disable netfilter on bridges. net.bridge.bridge-nf-call-ip6tables = 0 net.bridge.bridge-nf-call-iptables = 0 net.bridge.bridge-nf-call-arptables = 0 Controls the maximum size of a message, in bytes kernel.msgmnb = 65536 Controls the default maxmimum size of a mesage queue kernel.msgmax = 65536 Controls the maximum shared segment size, in bytes kernel.shmmax = 68719476736 Controls the maximum number of shared memory segments, in pages kernel.shmall = 4294967296 fs.aio-max-nr = 1048576 fs.file-max = 6815744 kernel.sem = 250 32000 100 128 net.ipv4.ip_local_port_range = 9000 65500 net.core.rmem_default = 262144 net.core.rmem_max = 4194304 net.core.wmem_default = 262144 net.core.wmem_max = 1048576 /sbin/sysctl -p net.ipv4.ip_forward = 0 net.ipv4.conf.default.rp_filter = 1 net.ipv4.conf.default.accept_source_route = 0 kernel.sysrq = 0 kernel.core_uses_pid = 1 net.ipv4.tcp_syncookies = 1 error: "net.bridge.bridge-nf-call-ip6tables" is an unknown key error: "net.bridge.bridge-nf-call-iptables" is an unknown key error: "net.bridge.bridge-nf-call-arptables" is an unknown key kernel.msgmnb = 65536 kernel.msgmax = 65536 kernel.shmmax = 68719476736 kernel.shmall = 4294967296 fs.aio-max-nr = 1048576 fs.file-max = 6815744 kernel.sem = 250 32000 100 128 net.ipv4.ip_local_port_range = 9000 65500 net.core.rmem_default = 262144 net.core.rmem_max = 4194304 net.core.wmem_default = 262144 net.core.wmem_max = 1048576 su - oracle ulimit -Sn 1024 ulimit -Hn 1024 ulimit -Su 1024 ulimit -Hu 30482 ulimit -Su 1024 ulimit -Ss 10240 ulimit -Hs unlimited su - nano /etc/security/limits.conf *added to the end of the file * oracle soft nproc 2047 oracle hard nproc 16384 oracle soft nofile 1024 oracle hard nofile 65536 oracle soft stack 10240 exit exit su - mkdir -p /app/ chown -R oracle:oinstall /app/ chmod -R 775 /app/ 9. 
THIS IS PROBABLY WHERE I MESSED UP I then exited out of the root account so now I'm back in my account chris then I su - oracle echo $SHELL /bin/bash umask 0022 (so it should be set already to what is neccesary) Also from what I have read I do not need to set the DISPLAY variable because I'm installing this on the localhost I then opened the .bash_profile of the oracle and changed it to the following .bash_profile Get the aliases and functions if [ -f ~/.bashrc ]; then . ~/.bashrc fi User specific environment and startup programs PATH=$PATH:$HOME/bin; export PATH ORACLE_BASE=/app/oracle ORACLE_SID=orcl export ORACLE_BASE ORACLE_SID I then shutdown the virtual machine shared my desktop folder from my windows 7 then turned back on the virtual machine logged in as chris opened up a terminal then: su - for some reason the shared folder didn't appear so I reinstalled vmware tools again and restarted then same as before su - cp -R linux_oracle/database /db; chown -R oracle:oinstall /db; chmod -R 775 /db; ll /db drwxrwxr-x. 8 oracle oinstall 4096 Jun 5 06:20 database exit su - oracle cd /db/database ./runInstaller AND FINALLY THE INFAMOUS JAVA:132 ERROR MESSAGE Starting Oracle Universal Installer... Checking Temp space: must be greater than 80 MB. Actual 65646 MB Passed Checking swap space: must be greater than 150 MB. Actual 6015 MB Passed Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed Preparing to launch Oracle Universal Installer from /tmp/OraInstall2012-06-05_06-47-12AM. Please wait ...[oracle@localhost database]$ Exception in thread "main" java.lang.UnsatisfiedLinkError: /tmp/OraInstall2012-06-05_06-47-12AM/jdk/jre/lib/i386/xawt/libmawt.so: libXext.so.6: cannot open shared object file: No such file or directory at java.lang.ClassLoader$NativeLibrary.load(Native Method) at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1751) at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1647) at java.lang.Runtime.load0(Runtime.java:769) at java.lang.System.load(System.java:968) at java.lang.ClassLoader$NativeLibrary.load(Native Method) at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1751) at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1668) at java.lang.Runtime.loadLibrary0(Runtime.java:822) at java.lang.System.loadLibrary(System.java:993) at sun.security.action.LoadLibraryAction.run(LoadLibraryAction.java:50) at java.security.AccessController.doPrivileged(Native Method) at java.awt.Toolkit.loadLibraries(Toolkit.java:1509) at java.awt.Toolkit.(Toolkit.java:1530) at com.jgoodies.looks.LookUtils.isLowResolution(Unknown Source) at com.jgoodies.looks.LookUtils.(Unknown Source) at com.jgoodies.looks.plastic.PlasticLookAndFeel.(PlasticLookAndFeel.java:122) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:242) at javax.swing.SwingUtilities.loadSystemClass(SwingUtilities.java:1783) at javax.swing.UIManager.setLookAndFeel(UIManager.java:480) at oracle.install.commons.util.Application.startup(Application.java:758) at oracle.install.commons.flow.FlowApplication.startup(FlowApplication.java:164) at oracle.install.commons.flow.FlowApplication.startup(FlowApplication.java:181) at oracle.install.commons.base.driver.common.Installer.startup(Installer.java:265) at oracle.install.ivw.db.driver.DBInstaller.startup(DBInstaller.java:114) at oracle.install.ivw.db.driver.DBInstaller.main(DBInstaller.java:132)
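    The UnsatisfiedLinkError at the end is the installer's bundled 32-bit JDK failing to load the 32-bit X11 client libraries (libXext.so.6 and friends), not a problem with the earlier kernel/limits setup. On RHEL 6 the usual fix - hedged, but these are the stock package names - is to add the i686 X libraries alongside the x86_64 ones and re-run the installer from a session with a working DISPLAY:

        # as root
        yum install libX11.i686 libXext.i686 libXtst.i686 libXi.i686 libXau.i686 libxcb.i686
        # then, as oracle, from an X-capable session (local desktop or ssh -X):
        ./runInstaller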

    Read the article

  • DNS resolution problems; dig SERVFAIL error

    - by JustinP
    I'm setting up a couple of dedicated servers, and having problems setting up my nameservers properly. One of these is a LEMP server (LAMP with nginx in place of Apache), and the other will function solely as an email server, running exim/dovecot/ASSP antispam (no Apache). The LEMP server is CentOS 5.5, with no control panel, while the email server is CentOS 5.5 as well, with cPanel/WHM. So, I've had problems getting DNS set up properly. I have two domains, each one pointing to one of these servers. The nameservers are registered correctly with the domain registrar, and the nameserver IPs are entered correctly as well. I've spoken to tech support at the registrar and they confirm that everything is set up on their end. Not knowing much about DNS, I googled nameservers and DNS until I nearly went blind, and spent hours messing with the configuration. Eventually, I got the LEMP server's DNS working properly (no cPanel). Pleased with this triumph, I'm trying to mimic that configuration and repeat the process with the email server, and it's just not happening. The nameserver starts and stops, but the domain doesn't resolve. Things I have tried Going through standard procedures to set up DNS in WHM Clearing all DNS information, uninstalling BIND, then reinstalling all of that and again going through WHM procedures for setting up DNS Clearing all DNS information, and setting up BIND via shell (completely outside of cPanel) by using my config and zone files from the LEMP server as a template named runs just fine, but nothing is resolving. When I "dig any example.com" I get a SERVFAIL message. Nslookups return no information. Here are my config and zone files. named.conf controls { inet 127.0.0.1 allow { localhost; } keys { coretext-key; }; }; options { listen-on port 53 { any; }; listen-on-v6 port 53 { ::1; }; directory "/var/named"; dump-file "/var/named/data/cache_dump.db"; statistics-file "/var/named/data/named_stats.txt"; memstatistics-file "/var/named/data/named_mem_stats.txt"; // Those options should be used carefully because they disable port // randomization // query-source port 53; // query-source-v6 port 53; allow-query { any; }; allow-query-cache { any; }; }; logging { channel default_debug { file "data/named.run"; severity dynamic; }; }; view "localhost_resolver" { match-clients { 127.0.0.0/24; }; match-destinations { localhost; }; recursion yes; //zone "." IN { // type hint; // file "/var/named/named.ca"; //}; include "/etc/named.rfc1912.zones"; }; view "internal" { /* This view will contain zones you want to serve only to "internal" clients that connect via your directly attached LAN interfaces - "localnets" . */ match-clients { localnets; }; match-destinations { localnets; }; recursion yes; zone "." IN { type hint; file "/var/named/named.ca"; }; // include "/var/named/named.rfc1912.zones"; // you should not serve your rfc1912 names to non-localhost clients. 
// These are your "authoritative" internal zones, and would probably // also be included in the "localhost_resolver" view above : zone "example.com" { type master; file "data/db.example.com"; }; zone "3.2.1.in-addr.arpa" { type master; file "data/db.1.2.3"; }; }; view "external" { /* This view will contain zones you want to serve only to "external" clients * that have addresses that are not on your directly attached LAN interface subnets: */ match-clients { any; }; match-destinations { any; }; recursion no; // you'd probably want to deny recursion to external clients, so you don't // end up providing free DNS service to all takers allow-query-cache { none; }; // Disable lookups for any cached data and root hints // all views must contain the root hints zone: //include "/etc/named.rfc1912.zones"; zone "." IN { type hint; file "/var/named/named.ca"; }; zone "example.com" { type master; file "data/db.example.com"; }; zone "3.2.1.in-addr.arpa" { type master; file "data/db.1.2.3"; }; }; include "/etc/rndc.key"; db.example.com $TTL 1D ; ; Zone file for example.com ; ; Mandatory minimum for a working domain ; @ IN SOA ns1.example.com. contact.example.com. ( 2011042905 ; serial 8H ; refresh 2H ; retry 4W ; expire 1D ; default_ttl ) NS ns1.example.com. NS ns2.example.com. ns1 A 1.2.3.4 ns2 A 1.2.3.5 example.com. A 1.2.3.4 localhost A 127.0.0.1 www CNAME example.com. mail CNAME example.com. ; db.1.2.3 $TTL 1D $ORIGIN 3.2.1.in-addr.arpa. @ IN SOA ns1.example.com contact.example.com. ( 2011042908 ; 8H ; 2H ; 4W ; 1D ; ) NS ns1.example.com. NS ns2.example.com. 4 PTR hostname.example.com. 5 PTR hostname.example.com. ; Also of note: both of these servers are managed. Tech support is very responsive, and largely useless. Hours go by with them asking me questions to narrow down what could be wrong, then they pass the ticket to the tech on the next shift, who ignores everything that's happened already and spend his whole shift asking all the same questions the last guy asked. So, in summary: *Nameservers, with IPs, are correctly registered with domain registrar *named is configured and running *...and must not be configured correctly, because nothing resolves. Any help would be great. I changed domains and IPs in the files to generics, but let me know if you need to know the domain in question. Thanks! UPDATE I found that I didn't have 127.0.0.1 in /etc/resolv.conf, so I added it, along with my two public IPs that I have named listening on. resolv.conf search www.example.com example.com nameserver 127.0.0.1 nameserver 7.8.9.10 ;Was in here by default, authoritative nameserver of hosting company nameserver 1.2.3.4 ;Public IP #1 nameserver 1.2.3.5 ;Public IP #2 Now when I DIG example.com from the host, it resolves. If I try to DIG from my other server (in the same datacenter), or from the internet, it times out or I get SERVFAIL.
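Note that the entries added to resolv.conf only affect lookups the host itself performs; they have no effect on what named serves to other machines. One hedged way to narrow a SERVFAIL down is to validate the config and zone files and then query the daemon directly, bypassing any resolver. A sketch, assuming the file paths from the posted named.conf:

# Validate the config and each zone file
named-checkconf /etc/named.conf
named-checkzone example.com /var/named/data/db.example.com
named-checkzone 3.2.1.in-addr.arpa /var/named/data/db.1.2.3

# Query the server directly, from the host and from an outside machine
dig @127.0.0.1 example.com SOA
dig @1.2.3.4 example.com SOA +norecurse

# Watch the log channel defined in named.conf while the zones load
tail -f /var/named/data/named.run

If named-checkzone reports an error, named will answer SERVFAIL for that zone even though the daemon itself starts cleanly, which would match the symptoms described.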

    Read the article

  • SQLite and populating ListView, Android

    - by Rob Bushway
    I am populating a text and list view from a sqllite database. The data is populating from the cursor correctly (I see the list filling with text rows), but I'm not able to see the actual text in the rows - all I see are empty rows. For the life of me, I can't figure out what I'm not able to see the data in the text rows. My layouts: Tab layout: list layout: row layout: <?xml version="1.0" encoding="utf-8"?> / My topics activity package com.gotquestions.gqapp; import com.gotquestions.gqapp.R.layout; import android.R; import android.app.ListActivity; import android.database.Cursor; import android.database.SQLException; import android.os.Bundle; import android.widget.SimpleCursorAdapter; public class TopicsActivity extends ListActivity { private DataBaseHelper myDbHelper; public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); // DataBaseHelper myDbHelper = new DataBaseHelper(this); myDbHelper = new DataBaseHelper(this); //test try { myDbHelper.openDataBase(); }catch(SQLException sqle){ throw sqle; } setContentView(layout.list_layout); Cursor c = myDbHelper.fetchAllTopics(); startManagingCursor(c); String[] from = new String[] { DataBaseHelper.KEY_TITLE }; int[] to = new int[] { R.id.text1 }; // Now create an array adapter and set it to display using our row SimpleCursorAdapter notes = new SimpleCursorAdapter(this, layout.notes_row, c, from, to); setListAdapter(notes); myDbHelper.close(); } } My database helper: package com.gotquestions.gqapp; import java.io.FileOutputStream; import java.io.IOException; import java.io.InputStream; import java.io.OutputStream; import android.content.Context; import android.database.Cursor; import android.database.SQLException; import android.database.sqlite.SQLiteDatabase; import android.database.sqlite.SQLiteException; import android.database.sqlite.SQLiteOpenHelper; import android.util.Log; public class DataBaseHelper extends SQLiteOpenHelper{ //The Android's default system path of your application database. private static String DB_PATH = "/data/data/com.gotquestions.gqapp/databases/"; private static String DB_NAME = "gotquestions_database.mp3"; public static final String KEY_TITLE = "topic_title"; public static final String KEY_ARTICLE_TITLE = "article_title"; public static final String KEY_ROWID = "_id"; private static final String TAG = null; private SQLiteDatabase myDataBase; // private final Context myContext; /** * Constructor * Takes and keeps a reference of the passed context in order to access to the application assets and resources. * @param context */ public DataBaseHelper(Context context) { super(context, DB_NAME, null, 1); this.myContext = context; } /** * Creates a empty database on the system and rewrites it with your own database. * */ public void createDataBase() throws IOException{ boolean dbExist = checkDataBase(); if(dbExist){ //do nothing - database already exist }else{ //By calling this method and empty database will be created into the default system path //of your application so we are gonna be able to overwrite that database with our database. this.getReadableDatabase(); try { copyDataBase(); } catch (IOException e) { throw new Error("Error copying database"); } } } /** * Check if the database already exist to avoid re-copying the file each time you open the application. 
* @return true if it exists, false if it doesn't */ private boolean checkDataBase(){ SQLiteDatabase checkDB = null; try{ String myPath = DB_PATH + DB_NAME; checkDB = SQLiteDatabase.openDatabase(myPath, null, SQLiteDatabase.OPEN_READONLY); }catch(SQLiteException e){ //database does't exist yet. } if(checkDB != null){ checkDB.close(); } return checkDB != null ? true : false; } /** * Copies your database from your local assets-folder to the just created empty database in the * system folder, from where it can be accessed and handled. * This is done by transfering bytestream. * */ private void copyDataBase() throws IOException{ //Open your local db as the input stream InputStream myInput = myContext.getAssets().open(DB_NAME); // Path to the just created empty db String outFileName = DB_PATH + DB_NAME; //Open the empty db as the output stream OutputStream myOutput = new FileOutputStream(outFileName); //transfer bytes from the inputfile to the outputfile byte[] buffer = new byte[1024]; int length; while ((length = myInput.read(buffer))>0){ myOutput.write(buffer, 0, length); } //Close the streams myOutput.flush(); myOutput.close(); myInput.close(); } public void openDataBase() throws SQLException{ //Open the database String myPath = DB_PATH + DB_NAME; myDataBase = SQLiteDatabase.openDatabase(myPath, null, SQLiteDatabase.OPEN_READONLY); } @Override public synchronized void close() { if(myDataBase != null) myDataBase.close(); super.close(); } @Override public void onCreate(SQLiteDatabase db) { // TODO Auto-generated method stub } @Override public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) { Log.w(TAG, "Upgrading database from version " + oldVersion + " to " + newVersion + ", which will destroy all old data"); db.execSQL("DROP TABLE IF EXISTS titles"); onCreate(db); } public Cursor getTitle(long rowId) throws SQLException { Cursor mCursor = myDataBase.query(true, "topics", new String[] { KEY_ROWID, KEY_TITLE }, KEY_ROWID + "=" + rowId, null, null, null, null, null); if (mCursor != null) { mCursor.moveToFirst(); } return mCursor; } public Cursor fetchAllTopics() { return myDataBase.query("topics", new String[] {KEY_ROWID, KEY_TITLE}, null, null, null, null, null); }; public Cursor fetchAllFavorites() { return myDataBase.query("articles", new String[] { KEY_ROWID, KEY_ARTICLE_TITLE}, null, null, null, null, null); }; }
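A common cause of rows that render but show no text is a mismatch between the ids in the `to` array and the TextView that actually exists in the row layout. Here `android.R` is imported as `R`, so `R.id.text1` refers to the platform id rather than anything defined in notes_row.xml. A minimal sketch of one way around this, using the platform-supplied one-line row layout so no custom id is needed (it assumes nothing about notes_row.xml):

// Sketch: bind KEY_TITLE to the TextView of the built-in single-line row layout.
String[] from = new String[] { DataBaseHelper.KEY_TITLE };
int[] to = new int[] { android.R.id.text1 };

SimpleCursorAdapter notes = new SimpleCursorAdapter(
        this,                                  // context
        android.R.layout.simple_list_item_1,   // platform row layout containing android.R.id.text1
        c,                                     // cursor returned by fetchAllTopics()
        from,                                  // column to display
        to);                                   // view id to populate
setListAdapter(notes);

If the custom notes_row layout is kept instead, the id passed in `to` has to come from the app's own generated class (for example com.gotquestions.gqapp.R.id.text1, assuming notes_row.xml declares a TextView with that id), and the cursor must include the _id column, which fetchAllTopics() already does.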

    Read the article

  • Why do I get this error when I try to push my SQLite3 database to PostgreSQL (via Taps) on the Cedar stack?

    - by rhodee
    I've done quite a bit of research on Heroku Dev Center and I am now looking to the community for help. Here is my problem. I can not push my db to Heroku Cedar Stack. I am trying to migrate a sqlite database to postgresql via Taps gem. When I am ready to deploy I run: bundle install --without production heroku run db:push I get the following result: Running db:seed attached to terminal... up, run.17 sh: db:seed: not found heroku run rake db:migrate And when I run the migration: heroku run rake db:migrate I get the following: Running rake db:migrate attached to terminal... up, run.18 rake aborted! No Rakefile found (looking for: rakefile, Rakefile, rakefile.rb, Rakefile.rb) /usr/local/lib/ruby/1.9.1/rake.rb:2367:in `raw_load_rakefile' /usr/local/lib/ruby/1.9.1/rake.rb:2007:in `block in load_rakefile' /usr/local/lib/ruby/1.9.1/rake.rb:2058:in `standard_exception_handling' /usr/local/lib/ruby/1.9.1/rake.rb:2006:in `load_rakefile' /usr/local/lib/ruby/1.9.1/rake.rb:1991:in `run' /usr/local/bin/rake:31:in `<main>' Everytime I push to Heroku (git push heroku master) it fails because my gem file is attempting to install sqlite3 gem-even though its inside of the development and test groups in my Gemfile. My database.yml production environment still points to sqlite adapter even after I have run the following command successfully: heroku config:add BUNDLE_WITHOUT="test development" --app app_name_on_heroku Out of ideas. Please help. If its useful I can post results of my gemfile, heroku ps and logs. Cheers UPDATE: After following @John's direction I now receive the following terminal message. Sending schema Schema: 100% |==========================================| Time: 00:00:07 Sending indexes schema_migrat: 100% |==========================================| Time: 00:00:00 Sending data 4 tables, 6 records schema_migrat: 0% | | ETA: --:--:-- Saving session to push_201111070749.dat.. !!! 
Caught Server Exception HTTP CODE: 500 Taps Server Error: LoadError: no such file to load -- sequel/adapters/ And the following warnings: ["/app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:249:in require'", "/app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:249:inblock in tsk_require'", "/app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:72:in block in check_requiring_thread'", "<internal:prelude>:10:insynchronize'", "/app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:69:in check_requiring_thread'", "/app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:249:intsk_require'", "/app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/database/connecting.rb:25:in adapter_class'", "/app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/database/connecting.rb:54:inconnect'", "/app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:119:in connect'", "/app/lib/taps/db_session.rb:14:inconn'", "/app/lib/taps/server.rb:91:in block in <class:Server>'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:865:incall'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:865:in block in route'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:521:ininstance_eval'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:521:in route_eval'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:500:inblock (2 levels) in route!'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:497:in catch'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:497:inblock in route!'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:476:in each'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:476:inroute!'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:601:in dispatch!'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:411:inblock in call!'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:566:in instance_eval'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:566:inblock in invoke'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:566:in catch'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:566:ininvoke'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:411:in call!'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:399:incall'", "/app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.1/lib/rack/auth/basic.rb:25:in call'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:979:inblock in call'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:1005:in synchronize'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:979:incall'", "/home/heroku_rack/lib/static_assets.rb:9:in call'", "/home/heroku_rack/lib/last_access.rb:15:incall'", "/app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.1/lib/rack/urlmap.rb:47:in block in call'", "/app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.1/lib/rack/urlmap.rb:41:ineach'", "/app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.1/lib/rack/urlmap.rb:41:in call'", "/home/heroku_rack/lib/date_header.rb:14:incall'", "/app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.1/lib/rack/builder.rb:77:in call'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:76:inblock in pre_process'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:74:in catch'", 
"/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:74:inpre_process'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:57:in process'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:42:inreceive_data'", "/app/.bundle/gems/ruby/1.9.1/gems/eventmachine-0.12.10/lib/eventmachine.rb:256:in run_machine'", "/app/.bundle/gems/ruby/1.9.1/gems/eventmachine-0.12.10/lib/eventmachine.rb:256:inrun'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/backends/base.rb:57:in start'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/server.rb:156:instart'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/controllers/controller.rb:80:in start'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/runner.rb:177:inrun_command'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/runner.rb:143:in run!'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/bin/thin:6:in'", "/usr/ruby1.9.2/bin/thin:19:in load'", "/usr/ruby1.9.2/bin/thin:19:in'"]

    Read the article

  • ASM programming, how to use loop?

    - by chris
    Hello. Im first time here.I am a college student. I've created a simple program by using assembly language. And im wondering if i can use loop method to run it almost samething as what it does below the program i posted. and im also eager to find someome who i can talk through MSN messanger so i can ask you questions right away.(if possible) ok thank you .MODEL small .STACK 400h .data prompt db 10,13,'Please enter a 3 digit number, example 100:',10,13,'$' ;10,13 cause to go to next line first_digit db 0d second_digit db 0d third_digit db 0d Not_prime db 10,13,'This number is not prime!',10,13,'$' prime db 10,13,'This number is prime!',10,13,'$' question db 10,13,'Do you want to contine Y/N $' counter dw 0d number dw 0d half dw ? .code Start: mov ax, @data ;establish access to the data segment mov ds, ax mov number, 0d LetsRoll: mov dx, offset prompt ; print the string (please enter a 3 digit...) mov ah, 9h int 21h ;execute ;read FIRST DIGIT mov ah, 1d ;bios code for read a keystroke int 21h ;call bios, it is understood that the ascii code will be returned in al mov first_digit, al ;may as well save a copy sub al, 30h ;Convert code to an actual integer cbw ;CONVERT BYTE TO WORD. This takes whatever number is in al and ;extends it to ax, doubling its size from 8 bits to 16 bits ;The first digit now occupies all of ax as an integer mov cx, 100d ;This is so we can calculate 100*1st digit +10*2nd digit + 3rd digit mul cx ;start to accumulate the 3 digit number in the variable imul cx ;it is understood that the other operand is ax ;AND that the result will use both dx::ax ;but we understand that dx will contain only leading zeros add number, ax ;save ;variable <number> now contains 1st digit * 10 ;---------------------------------------------------------------------- ;read SECOND DIGIT, multiply by 10 and add in mov ah, 1d ;bios code for read a keystroke int 21h ;call bios, it is understood that the ascii code will be returned in al mov second_digit, al ;may as well save a copy sub al, 30h ;Convert code to an actual integer cbw ;CONVERT BYTE TO WORD. This takes whatever number is in al and ;extends it to ax, boubling its size from 8 bits to 16 bits ;The first digit now occupies all of ax as an integer mov cx, 10d ;continue to accumulate the 3 digit number in the variable mul cx ;it is understood that the other operand is ax, containing first digit ;AND that the result will use both dx::ax ;but we understand that dx will contain only leading zeros. Ignore them add number, ax ;save -- nearly finished ;variable <number> now contains 1st digit * 100 + second digit * 10 ;---------------------------------------------------------------------- ;read THIRD DIGIT, add it in (no multiplication this time) mov ah, 1d ;bios code for read a keystroke int 21h ;call bios, it is understood that the ascii code will be returned in al mov third_digit, al ;may as well save a copy sub al, 30h ;Convert code to an actual integer cbw ;CONVERT BYTE TO WORD. This takes whatever number is in al and ;extends it to ax, boubling its size from 8 bits to 16 bits ;The first digit now occupies all of ax as an integer add number, ax ;Both my variable number and ax are 16 bits, so equal size mov ax, number ;copy contents of number to ax mov cx, 2h div cx ;Divide by cx mov half, ax ;copy the contents of ax to half mov cx, 2h; mov ax, number; ;copy numbers to ax xor dx, dx ;flush dx jmp prime_check ;jump to prime check print_question: mov dx, offset question ;print string (do you want to continue Y/N?) 
mov ah, 9h int 21h ;execute mov ah, 1h int 21h ;execute cmp al, 4eh ;compare je Exit ;jump to exit cmp al, 6eh ;compare je Exit ;jump to exit cmp al, 59h ;compare je Start ;jump to start cmp al, 79h ;compare je Start ;jump to start prime_check: div cx; ;Divide by cx cmp dx, 0h ;reset the value of dx je print_not_prime ;jump to not prime xor dx, dx; ;flush dx mov ax, number ;copy the contents of number to ax cmp cx, half ;compare half with cx je print_prime ;jump to print prime section inc cx; ;increment cx by one jmp prime_check ;repeat the prime check print_prime: mov dx, offset prime ;print string (this number is prime!) mov ah, 9h int 21h ;execute jmp print_question ;jumps to question (do you want to continue Y/N?) this is for repeat print_not_prime: mov dx, offset Not_prime ;print string (this number is not prime!) mov ah, 9h int 21h ;execute jmp print_question ;jumps to question (do you want to continue Y/N?) this is for repeat Exit: mov ah, 4ch int 21h ;execute exit END Start
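On the actual question about loops: the x86 `loop` instruction decrements CX and jumps back to its label while CX is non-zero, so the three nearly identical read-a-digit blocks can be collapsed into one. A minimal sketch in the same MASM/DOS int 21h style as the posted program (it reuses the same .data segment and clobbers AX, BX, CX and DX):

        mov     number, 0d
        mov     cx, 3              ; LOOP decrements CX and repeats while CX <> 0
read_digits:
        mov     ah, 1d             ; DOS: read one keystroke, ASCII returned in AL
        int     21h
        sub     al, 30h            ; ASCII digit -> integer 0..9
        cbw                        ; AL -> AX
        push    ax                 ; save the new digit
        mov     ax, number
        mov     bx, 10d
        mul     bx                 ; AX = number * 10 (DX holds leading zeros, ignored)
        pop     bx                 ; fetch the saved digit
        add     ax, bx
        mov     number, ax         ; number = number*10 + digit
        loop    read_digits        ; repeat until all 3 digits are read

After this block, number holds hundreds*100 + tens*10 + ones, the same value the three unrolled blocks compute.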

    Read the article

  • status update error (null field)

    - by ejah85
Hi guys, I have a problem I haven't been able to solve yet. I have one form where the admin needs to approve or reject a booking request. I've set the b_status field in the usage table to the default value IN PROCESS. I want to update b_status to BOOKING APPROVED when the user clicks the APPROVE button; otherwise, b_status should be updated to BOOKING REJECTED when the user clicks the REJECT button. Here is the form code:

<?php
$db = mysql_connect('localhost','root') or die ("unable to connect");
mysql_select_db('fyp',$db) or die ("able to select");
$sql="SELECT * FROM vehicle WHERE v_status='READY'";
$result = mysql_query($sql) or die ("Query failed!");
?>
<tr><td>&nbsp;</td></tr>
<tr>
<tr>
<td width="200"><font face="Arial" size="2" font color="#000000">Registration Number </font></td>
<td><select name="regno">
<option value="" selected>--Registration No--</option>
<?php while($row = mysql_fetch_array($result)){?>
<option value="<?php echo $row['regno']; ?>"><?php echo $row['regno']; ?></option>
<?php } ?>
</select></td>
<td><font face="Arial" size="2" font color="#000000">Reason</font></td>
<td><textarea name="reason" rows="3" cols="50 "value = ""></textarea></td>
</tr>
<?php
$db = mysql_connect('localhost','root') or die ("unable to connect");
mysql_select_db('fyp',$db) or die ("able to select");
$sql="SELECT * FROM driver WHERE d_status='READY'";
$result = mysql_query($sql) or die ("Query failed!");
?>
<tr>
<td><font face="Arial" size="2" font color="#000000">Driver</font></td>
<td><select id = "d_name" name="d_name">
<option value="" selected>--Driver Name--</option>
<?php while($row = mysql_fetch_array($result)){?>
<option value="<?php echo $row['d_name']; ?>"><?php echo $row['d_name']; ?></option>
<?php } ?>
</select></td>
</tr>
<tr>
<?php mysql_close($db); ?>
</table>
<p></p>
<center><input name="APPROVED" type="submit" id="APPROVED" value="APPROVED">
<input name="REJECT" type="submit" id="REJECT" value="REJECT">
</center>
</div>
</center>

And this is the process page code:

<?php
$db = mysql_connect('localhost','root') or die ("unable to connect");
mysql_select_db('fyp',$db) or die ("able to select");
$bookingno=mysql_real_escape_string($_POST['bookingno']);
$username=mysql_real_escape_string($_POST['username']);
$name=mysql_real_escape_string($_POST['name']);
$department=mysql_real_escape_string($_POST['department']);
$g_date=mysql_real_escape_string($_POST['g_date']);
$g_time=mysql_real_escape_string($_POST['g_time']);
$r_date=mysql_real_escape_string($_POST['r_date']);
$r_time=mysql_real_escape_string($_POST['r_time']);
$destination=mysql_real_escape_string($_POST['destination']);
$pass_num=mysql_real_escape_string($_POST['pass_num']);
$trip_purpose=mysql_real_escape_string($_POST['trip_purpose']);
$regno=mysql_real_escape_string($_POST['regno']);
$d_name=mysql_real_escape_string($_POST['d_name']);
$reason=mysql_real_escape_string($_POST['reason']);
$b_status=mysql_real_escape_string($_POST['b_status']);
$sql = "INSERT INTO `usage` VALUES('$bookingno','$username','$name','$department','$g_date','$g_time','$r_date','$r_time','$destination', '$pass_num','$trip_purpose','$regno','$d_name','$reason','$b_status')";
$query = "INSERT INTO `usage` VALUES b_status ='BOOKING APPROVED'";
$result = @mysql_query($query);
$query1 = "UPDATE driver SET d_status ='OUT' WHERE '$d_name'=d_name";
$result1 = @mysql_query($query1);
if(isset($_POST['APPROVED'])) {
$query2 = "UPDATE `usage` SET b_status ='BOOKING APPROVED' WHERE '$b_status'='IN PROCESS'";
$result2 = @mysql_query($query2);
}
if (isset($_POST['REJECT'])) {
$query3 = "UPDATE `usage` SET b_status ='BOOKING REJECTED' WHERE '$b_status'='IN PROCESS'";
$result3 = @mysql_query($query3);
}
//$result = mysql_query($sql) or die ("error!");
$result = mysql_query($sql) or trigger_error (mysql_error().' in '.$sql);

The problem is with the b_status field. Please help. :-)
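Two things stand out, offered as a hedged guess: the form never posts a b_status value, so $_POST['b_status'] is empty; and the UPDATE statements compare two string literals ('$b_status'='IN PROCESS') instead of naming the column, so no row ever matches. A minimal sketch of a process page that keys the update on the booking number instead (it assumes the form also posts the booking number, e.g. via a hidden field named bookingno; table and column names are taken from the question):

<?php
// Sketch: decide the new status from the button pressed, then update that booking's row.
$db = mysql_connect('localhost', 'root') or die('unable to connect');
mysql_select_db('fyp', $db) or die('unable to select');

$bookingno = mysql_real_escape_string($_POST['bookingno']);

if (isset($_POST['APPROVED'])) {
    $status = 'BOOKING APPROVED';
} elseif (isset($_POST['REJECT'])) {
    $status = 'BOOKING REJECTED';
} else {
    die('no decision submitted');
}

// The WHERE clause names the column (b_status), not a PHP variable literal,
// and targets only the booking that is still waiting for a decision.
$sql = "UPDATE `usage`
           SET b_status = '$status'
         WHERE bookingno = '$bookingno'
           AND b_status = 'IN PROCESS'";
mysql_query($sql) or die(mysql_error());
?>

With the row created earlier (relying on the column's IN PROCESS default), there is no need to INSERT again at approval time; only the UPDATE above is required.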

    Read the article
