Search Results

Search found 29811 results on 1193 pages for 'table of contents'.

Page 273/1193 | < Previous Page | 269 270 271 272 273 274 275 276 277 278 279 280  | Next Page >

  • Database Web Service using Toplink DB Provider

    - by Vishal Jain
    With JDeveloper 11gR2 you can now create database-based web services using the JAX-WS Provider. The key differences between this and the already existing PL/SQL Web Services support are:
    - Based on the JAX-WS Provider
    - Supports SQL queries for creating web services
    - Supports table CRUD operations
    This is present as a new option in the New Gallery under 'Web Services'. When you invoke the New Gallery option, it presents you with three options to choose from. In this entry I will explain the options for creating a service based on SQL queries and on table CRUD operations.
    SQL Query based Service: When you select this option, the 'Next' page asks you for the DB connection details. You can also choose whether you want SOAP 1.1 or 1.2 format; for this example, I will proceed with SOAP 1.1, the default option. On the next page, you can give the SQL query. The wizard supports bind variables, so you can parametrize your queries. Enter "?" for an input parameter you want to supply at runtime, and the "Bind Variables" button will be enabled; here you can specify the name and type of the variable. Finish the wizard. Now you can test your service in the Analyzer and see that the bind variable you specified appears as an input parameter in the Analyzer input form.
    CRUD Operations: For this, at Step 2 of the wizard, select the radio button "Generate Table CRUD Service Provider". At the next step, select the DB connection and the table for which you want to generate the default set of operations. Finish the wizard, then run the service in the Analyzer for a quick check and see that all the basic operations are exposed.
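    As a rough illustration of the bind-variable step (the table and column names below are hypothetical, not taken from the article), the query entered in the wizard might look like this:

    ```sql
    -- Hypothetical schema for illustration only. The "?" placeholder is what
    -- the wizard turns into a named, typed bind variable that callers of the
    -- generated web service supply at runtime.
    SELECT employee_id, first_name, last_name
    FROM employees
    WHERE department_id = ?
    ```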

    Read the article

  • Understand how the TLB (Translation Lookaside buffer) works and interacts with pagetable and addresses

    - by Darxval
    So I am trying to understand the TLB (Translation Lookaside Buffer), but I am having a hard time grasping it in the context of two streams of addresses plus a TLB and a page table. I don't understand how the TLB relates to the streamed addresses/tags and the page table.
    Address streams:
    a. 4669, 2227, 13916, 34587, 48870, 12608, 49225
    b. 12948, 49419, 46814, 13975, 40004, 12707
    TLB:
    | Valid | Tag | Physical Page Number |
    | 1     | 11  | 12                   |
    | 1     | 7   | 4                    |
    | 1     | 3   | 6                    |
    | 0     | 4   | 9                    |
    Page table:
    | Valid | Physical Page or in Disk |
    | 1     | 5    |
    | 0     | Disk |
    | 0     | Disk |
    | 1     | 6    |
    | 1     | 9    |
    | 1     | 11   |
    | 0     | Disk |
    | 1     | 4    |
    | 0     | Disk |
    | 0     | Disk |
    | 1     | 3    |
    | 1     | 12   |
    How does the TLB work with the page table and the addresses? The homework question given is: "Given the address stream in the table, and the initial TLB and page table states shown above, show the final state of the system; also list for each reference whether it is a hit in the TLB, a hit in the page table, or a page fault." But I think first I just need to know how the TLB works with these other elements and how to determine things. How do I even start to answer this question?

    Read the article

  • Better way to design a database

    - by cMinor
    I have a conceptual problem and I would like to get your ideas on how I'll be able to do what I am aiming for. My goal is to create a database with information about the people who work at a place, depending on their profession and skills, and to keep track of salary and projects (how much a project would cost, summing all the hours of work). I have 3 categories which can have subcategories:
    - Outsourcing
    - Technician: welder, turner, assistant
    - Administrative: supervisor, manager
    So each person has their information and the projects they are working on; also, one person may do several jobs. I was thinking about having 5 tables (EMPLOYEE, SKILLS, PROYECTS, SALARY, PROFESSION) but I guess there is a better way of doing this.
    create table Employee (
      PRIMARY KEY [Person_ID] int(10),
      [Name] varchar(30),
      [sex] varchar(10),
      [address] varchar(10),
      [profession] varchar(10),
      [Skills_ID] int(10),
      [Proyect_ID] int(10),
      [Salary_ID] int(10),
      [Salary] float
    )
    create table Skills (
      PRIMARY KEY [Skills_ID] int(10),
      FOREIGN KEY [Skills_name] varchar(10) REFERENCES Employee(Person_ID),
      [Skills_pay] float(10),
      [Comments] varchar(50)
    )
    create table Proyects (
      PRIMARY KEY [Proyect_ID] int(10),
      FOREIGN KEY [Skills_name] varchar(10) REFERENCES Employee(Person_ID),
      [Proyect_name] varchar(10),
      [working_Hours] float(10),
      [Comments] varchar(50)
    )
    create table Salary (
      PRIMARY KEY [Salary_ID] int(10),
      FOREIGN KEY [Skills_name] varchar(10) REFERENCES Employee(Person_ID),
      [Proyect_name] varchar(10),
      [working_Hours] float(10),
      [Comments] varchar(50)
    )
    To get the total cost of a project I would just sum the working hours of each employee involved and add some extra costs in an aggregate query. Is there a way to do this in a more efficient way? What should be added to or deleted from this small model? I guess I am missing something in the salary part - maybe I need another table for that?
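    For the cost part specifically, the aggregate the question describes could be sketched like this, using the column names from the draft schema above; it is only an illustration, not a recommended design:

    ```sql
    -- Total logged hours per project, using the draft Proyects table as-is.
    -- An hourly rate or extra-cost column would still need to be joined in
    -- from wherever salary/rate data ends up living.
    SELECT Proyect_name, SUM(working_Hours) AS total_hours
    FROM Proyects
    GROUP BY Proyect_name;
    ```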

    Read the article

  • SQL Import to update existing records?

    - by Kenundrum
    I've got a table in a database that contains costs for items that get updated monthly. To update these costs, we have someone export the table, do some magic in Excel, and then import the table back into the database. We're running MSSQL 2005 and using the built-in SQL Management Studio. The problem is that when importing back into the table, we have to delete all the records before we import or else we'll get errors. The ideal situation would be for the import to recognize the primary keys and then update the records instead of trying to create a second record with a duplicate key, halting the import. The best illustration of the behavior we're trying to get can be found at http://sqlmanager.net/en/products/mssql/dataimport/documentation/hs2180.html - the "update or insert" example. Is something like this possible with the built-in tools, or do we have to get third-party software to make it happen?
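    One common workaround on SQL Server 2005 (which predates MERGE) is to import into a staging table and then run an update-then-insert pair against the real table. The table and column names below are made up for illustration:

    ```sql
    -- Assumes the Excel export was imported into dbo.ItemCost_Staging and that
    -- ItemID is the primary key of the target table dbo.ItemCost.
    UPDATE t
    SET    t.Cost = s.Cost
    FROM   dbo.ItemCost AS t
    JOIN   dbo.ItemCost_Staging AS s ON s.ItemID = t.ItemID;

    INSERT INTO dbo.ItemCost (ItemID, Cost)
    SELECT s.ItemID, s.Cost
    FROM   dbo.ItemCost_Staging AS s
    WHERE  NOT EXISTS (SELECT 1 FROM dbo.ItemCost AS t WHERE t.ItemID = s.ItemID);
    ```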

    Read the article

  • Top 5 Developer Enabling Nuggets in MySQL 5.6

    - by Rob Young
    MySQL 5.6 is truly a better MySQL and reflects Oracle's commitment to the evolution of the most popular and widely used open source database on the planet. The feature-complete 5.6 release candidate was announced at MySQL Connect in late September and the production-ready, generally available ("GA") product should be available in early 2013. While the message around 5.6 has been focused mainly on mass appeal, advanced topics like performance/scale, high availability, and self-healing replication clusters, MySQL 5.6 also provides many developer-friendly nuggets designed to enable those who are building the next generation of web-based and embedded applications and services. Boiling down the 5.6 feature set into a smaller set of simple, easy-to-use goodies designed with developer agility in mind, these things deserve a quick look:
    Subquery Optimizations
    Using semi-JOINs and late materialization, the MySQL 5.6 Optimizer delivers greatly improved subquery performance. Specifically, the optimizer is now more efficient in handling subqueries in the FROM clause; materialization of subqueries in the FROM clause is now postponed until their contents are needed during execution. Additionally, the optimizer may add an index to derived tables during execution to speed up row retrieval. Internal tests run using the DBT-3 benchmark Query #13, shown below, demonstrate an order of magnitude improvement in execution times (from days to seconds) over previous versions.
    select c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice, sum(l_quantity)
    from customer, orders, lineitem
    where o_orderkey in (
        select l_orderkey
        from lineitem
        group by l_orderkey
        having sum(l_quantity) > 313
    )
    and c_custkey = o_custkey
    and o_orderkey = l_orderkey
    group by c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice
    order by o_totalprice desc, o_orderdate
    LIMIT 100;
    What does this mean for developers? For starters, simplified subqueries can now be coded instead of complex joins for cross-table lookups:
    SELECT title FROM film WHERE film_id IN (SELECT film_id FROM film_actor GROUP BY film_id HAVING count(*) > 12);
    And even more importantly, subqueries embedded in packaged applications no longer need to be re-written into joins. This is good news for both ISVs and their customers who have access to the underlying queries and who have spent development cycles writing, testing and maintaining their own versions of re-written queries across updated versions of a packaged app. The details are in the MySQL 5.6 docs.
    Online DDL Operations
    Today's web-based applications are designed to rapidly evolve and adapt to meet business and revenue-generation requirements. As a result, development SLAs are now most often measured in minutes vs days or weeks. For example, when an application must quickly support new product lines or new products within existing product lines, the backend database schema must adapt in kind, and most commonly while the application remains available for normal business operations. MySQL 5.6 supports this level of online schema flexibility and agility by providing the following new ALTER TABLE online DDL syntax additions:
    - CREATE INDEX
    - DROP INDEX
    - Change AUTO_INCREMENT value for a column
    - ADD/DROP FOREIGN KEY
    - Rename COLUMN
    - Change ROW_FORMAT, KEY_BLOCK_SIZE for a table
    - Change COLUMN NULL, NOT NULL
    - Add, drop, reorder COLUMN
    Again, the details are in the MySQL 5.6 docs.
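    To make the online DDL idea concrete, the new syntax lets you request an in-place, non-blocking change explicitly; the table and index names in this sketch are hypothetical:

    ```sql
    -- Build a secondary index online: ALGORITHM=INPLACE avoids a table copy and
    -- LOCK=NONE keeps the table readable and writable while the index is built.
    ALTER TABLE orders
      ADD INDEX idx_customer_created (customer_id, created_at),
      ALGORITHM=INPLACE, LOCK=NONE;
    ```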
    Key-Value Access to InnoDB via the Memcached API
    Many of the next generation of web, cloud, social and mobile applications require fast operations against simple key/value pairs. At the same time, they must retain the ability to run complex queries against the same data, as well as ensure the data is protected with ACID guarantees. With the new NoSQL API for InnoDB, developers have all the benefits of a transactional RDBMS, coupled with the performance capabilities of a key/value store. MySQL 5.6 provides simple, key-value interaction with InnoDB data via the familiar Memcached API. Implemented via a new Memcached daemon plug-in to mysqld, the new Memcached protocol is mapped directly to the native InnoDB API and enables developers to use existing Memcached clients to bypass the expense of query parsing and go directly to InnoDB data for lookups and transactionally compliant updates. The API makes it possible to re-use standard Memcached libraries and clients, while extending Memcached functionality by integrating a persistent, crash-safe, transactional database back-end. So does this option provide a performance benefit over SQL? Internal performance benchmarks using a customized Java application and test harness show some very promising results, with a 9X improvement in overall throughput for SET/INSERT operations. You can follow the InnoDB team blog for the methodology, implementation and internal test cases that generated these results here. How to get started with the Memcached API to InnoDB is here.
    New Instrumentation in Performance Schema
    The MySQL Performance Schema was introduced in MySQL 5.5 and is designed to provide point-in-time metrics for key performance indicators. MySQL 5.6 improves the Performance Schema in answer to the most common DBA and developer problems. New instrumentations include:
    - Statements/Stages: What are my most resource-intensive queries? Where do they spend time?
    - Table/Index I/O, Table Locks: Which application tables/indexes cause the most load or contention?
    - Users/Hosts/Accounts: Which application users, hosts, accounts are consuming the most resources?
    - Network I/O: What is the network load like? How long do sessions idle?
    - Summaries: Aggregated statistics grouped by statement, thread, user, host, account or object.
    The MySQL 5.6 Performance Schema is now enabled by default in the my.cnf file with optimized and auto-tuned settings that minimize overhead (< 5%, but mileage will vary), so using the Performance Schema on a production server to monitor the most common application use cases is less of an issue. In addition, new atomic levels of instrumentation enable the capture of granular levels of resource consumption by users, hosts, accounts, applications, etc. for billing and chargeback purposes in cloud computing environments. The MySQL docs are an excellent resource for all that is available and all that can be done with the 5.6 Performance Schema.
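    As a quick taste of the new instrumentation (a sketch only; the LIMIT is arbitrary), the statement-digest summary table added in 5.6 can surface the most expensive query shapes directly from SQL:

    ```sql
    -- Top query digests by total time spent, from the 5.6 Performance Schema.
    SELECT digest_text, count_star, sum_timer_wait
    FROM performance_schema.events_statements_summary_by_digest
    ORDER BY sum_timer_wait DESC
    LIMIT 5;
    ```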
    Better Condition Handling - GET DIAGNOSTICS
    MySQL 5.6 enables developers to easily check for error conditions and code for exceptions by introducing the new MySQL Diagnostics Area and the corresponding GET DIAGNOSTICS interface command. The Diagnostics Area can be populated via multiple options and provides two kinds of information:
    - Statement - which provides the affected row count and the number of conditions that occurred
    - Condition - which provides error codes and messages for all conditions that were returned by a previous operation
    The new GET DIAGNOSTICS command provides a standard interface into the Diagnostics Area and can be used via the CLI or from within application code to easily retrieve and handle the results of the most recent statement execution. An example of how it is used might be:
    mysql> DROP TABLE test.no_such_table;
    ERROR 1051 (42S02): Unknown table 'test.no_such_table'
    mysql> GET DIAGNOSTICS CONDITION 1
        -> @p1 = RETURNED_SQLSTATE, @p2 = MESSAGE_TEXT;
    mysql> SELECT @p1, @p2;
    +-------+------------------------------------+
    | @p1   | @p2                                |
    +-------+------------------------------------+
    | 42S02 | Unknown table 'test.no_such_table' |
    +-------+------------------------------------+
    Options for leveraging the MySQL Diagnostics Area and GET DIAGNOSTICS are detailed in the MySQL docs.
    While the above is a summary of some of the key developer-enabling 5.6 features, it is by no means exhaustive. You can dig deeper into what MySQL 5.6 has to offer by reading this developer zone article or checking out "What's New in MySQL 5.6" in the MySQL docs.
    BONUS ALERT! If you are developing on Windows or are considering MySQL as an alternative to SQL Server for your next project, application or shipping product, you should check out the MySQL Installer for Windows. The installer includes the MySQL 5.6 RC database, all drivers, Visual Studio and Excel plugins, tray monitor and development tools, all in a single download and GUI installer.
    So what are your next steps?
    - Register for the Dec. 13 "MySQL 5.6: Building the Next Generation of Web-Based Applications and Services" live web event. Hurry! Seats are limited.
    - Download the MySQL 5.6 Release Candidate (look under the Development Releases tab)
    - Provide feedback at http://bugs.mysql.com/
    - Join the developer discussion on the MySQL Forums
    - Explore all MySQL Products and Developer Tools
    As always, thanks for your continued support of MySQL!

    Read the article

  • Setting up a dualboot by installing cloned partitions using clonezilla

    - by Nimjox
    I'm trying to set up a dual boot system where I have Windows 7 and Linux Mint. Here's the kicker: both are partitions I've saved using Clonezilla from different places, and to make matters worse Linux Mint is formatted as an LVM. I need both of these images specifically, as Windows is a corporate image that I must use and the other is a development image that took me a week to set up. I've gotten it almost all working, but my issue is that I can't get Clonezilla to not mess up the partition table of Windows when installing Mint, or vice versa. I can use the -k1 option, which doesn't copy the partition table, but then I have an unusable partition when it clones and I'm not sure how to fix the partition table. Here's what I'm doing:
    1. Use GParted to make partitions: sda1 40GB ntfs (Windows), sda2 extended 70GB, sda5 lvm2 pv 69.99GB (Linux), sda3 500MB (GRUB)
    2. Clonezilla the Windows image into the sda1 partition (keeping the partition table)
    3. Clonezilla the Linux image into the sda5 partition (not recreating the partition table)
    After all that I can boot into Windows using the default MBR. I can use a rescue-repair CD to reinstall GRUB, which will see Windows 7, but I can't get it to see the Linux OS. I'm thinking it's because of the sda5 partition, but I'm not sure. Any ideas on what I could do to get this working or where I might be going wrong? If there is any additional detail you need please let me know and I'll edit, as this is a lot.

    Read the article

  • SQL SERVER – Select Columns from Stored Procedure Resultset

    - by Pinal Dave
    It is fun to go back to basics often. Here is one classic question: "How do I select columns from a Stored Procedure resultset?" Though stored procedures were introduced many years ago, the question about retrieving columns from a stored procedure is still very popular with beginners. Let us see the solution in quick steps. First we will create a sample stored procedure.
    CREATE PROCEDURE SampleSP
    AS
    SELECT 1 AS Col1, 2 AS Col2
    UNION
    SELECT 11, 22
    GO
    Now we will create a table where we will temporarily store the result set of the stored procedure. We will be using INSERT INTO ... EXEC to retrieve the values and insert them into the temporary table.
    CREATE TABLE #TempTable (Col1 INT, Col2 INT)
    GO
    INSERT INTO #TempTable
    EXEC SampleSP
    GO
    Next we will retrieve our data from the stored procedure.
    SELECT * FROM #TempTable
    GO
    Finally we will clean up all the objects which we have created.
    DROP TABLE #TempTable
    DROP PROCEDURE SampleSP
    GO
    Let me know if you want me to share more such back-to-basics tips. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Stored Procedure, SQL Tips and Tricks, T SQL

    Read the article

  • JDBCRealm can't find sqlite file

    - by Tom A
    My authentication fails with java.sql.SQLException: no such table: credentials where credentials is the name of the user/password table. I have checked the db file and the table is there. I think you also get this error when sqlite jdbc can't even find the file. I am specifying my realm in a META-INF/context.xml file. Is there any trick to getting the path right? I have tried just about everything I can think of.

    Read the article

  • MySQL privileges - DROP tables, not databases

    - by Michal M
    Hi, can someone help me with privileges here? I need to create a user that can DROP tables within databases but cannot DROP the databases themselves. From what I understand from the MySQL docs, you cannot simply do this: "The DROP privilege enables you to drop (remove) existing databases, tables, and views. Beginning with MySQL 5.1.10, the DROP privilege is also required in order to use the statement ALTER TABLE ... DROP PARTITION on a partitioned table. Beginning with MySQL 5.1.16, the DROP privilege is required for TRUNCATE TABLE (before that, TRUNCATE TABLE requires the DELETE privilege)." Any ideas?
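    If per-table grants are acceptable, one possible sketch is to grant DROP at the table level rather than on the whole database; the database, table and account names here are made up, and whether this fits depends on how many tables need to be covered:

    ```sql
    -- Table-level grant: DROP applies only to the named table, so the account
    -- gets no database-level DROP privilege to use against the database itself.
    GRANT SELECT, INSERT, UPDATE, DELETE, DROP
    ON appdb.orders
    TO 'deploy_user'@'localhost';
    ```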

    Read the article

  • Access losing db connections

    - by Dwight T
    I have a weird problem going on at work. People have been using MS Access to connect to a SQL Server database, and lately people are getting sporadic problems connecting to the servers. It's not always the same users and it's not always a problem, which makes it a real pain to try to solve. One example of a related problem: a person has a linked table and she would filter the table, or write a query on it, to return rows where itemsku = 'ABCD1234'. It would return one record, but with ItemSku LMKN7486 - every time it would return the wrong record, but consistently the same wrong record, so itemsku ABCD1234 always returned LMNK7486. One would think it might be a driver problem, but it also could be a user problem. Just posting the question to see if anyone else has had similar scenarios. Thanks

    Read the article

  • Azure eBook Update #1 &ndash; 16 authors so far!

    - by Eric Nelson
    I just wanted to share with folks where we are up to with the Windows Azure eBook (check out the original post for full details). I have had lots of great submissions from folks with some awesome stuff to share on Azure. Currently we have 16 authors and 25 proposed articles. There are still a couple of days left to submit your proposal if you would like to get involved (see the original post), and there are some topic suggestions below for which we don't currently have authors. It is official - I'm excited! :-)
    | Article | Area | Accepted |
    | Wikipedia Explorer: A case study how we did it and why. | CaseStudy | Optional |
    | Patterns for the Windows Azure Platform (picking up 1 or 2 patterns that seem to be evolving) | Architecture | Optional |
    | Azure and cost-oriented architecture. | Architecture | Yes |
    | Code walkthrough of a comprehensive application submitted to newCloudApp contest | CaseStudy | Yes |
    | Principles of highly scalable apps on Azure | Compute | Optional |
    | Auto-Scaling Azure | Compute | Yes |
    | Implementing a distributed cache using memcached with worker roles | Interop | Yes |
    | Building a content-based router service to direct requests to internal HTTP endpoints | Compute | Optional |
    | How to debug an Azure app with a custom TraceListener & the AppFabric Service Bus | AppFabric | Yes |
    | How to host Java apps in Azure | Interop | Yes |
    | Bing Maps Tile Servers using Azure Blob Storage | Interop | Yes |
    | Tricks for storing time and date fields in Table Storage | Storage | Yes |
    | Service Runtime in Windows Azure | Compute | Yes |
    | Azure Drive | Storage | Optional |
    | Queries in Azure Table | Storage | Optional |
    | Getting RubyOnRails running on Azure | Interop | Yes |
    | Consuming Azure services within Windows Phone | Interop | Yes |
    | De-risking Your First Azure Project | Architecture | Yes |
    | Designing for failure | Architecture | Optional |
    | Connecting to SQL Azure In x Minutes | SQLAzure | Yes |
    | Using Azure Table Service as a NoSQL store via the REST API | Storage | Yes |
    | Azure Table Service REST API | Storage | Optional |
    | Threading, Scalability and Reliability in the Cloud | Compute | Yes |
    | Azure Diagnostics | Compute | Yes |
    | 5 steps to getting started with Windows Azure | Introduction | Yes |
    | The best tools for working with Windows Azure | Tools | Author Needed |
    | Understanding how SQL Azure works | SQLAzure | Author Needed |
    | Getting started with AppFabric Control Services | AppFabric | Author Needed |
    | Using the Microsoft Sync Framework with SQL Azure | SQLAzure | Author Needed |
    | Dallas - just a TV show or something more? | Dallas | Author Needed |
    | Comparing Azure to other cloud offerings | Interop | Author Needed |
    | Hybrid solutions using Azure and on-premise | Interop | Author Needed |

    Read the article

  • How do I change the format of group by data in Excel 2003 pivot tables

    - by Bernard
    I have data that lists the value of a contract per contract number, so I've created a pivot table that counts the number of contracts valued at 0 - 10, 11 - 20, etc. I want to be able to format the groups, e.g. $0 - $10, $11 - $20. I've tried formatting the underlying data as currency and formatting the column in the pivot table, but it still shows as 0 - 10, 11 - 20. Also, I have a column in the pivot table that says Total, which is the count of the number of contracts in that range, i.e.
    | Value   | Total |
    | 0 - 10  | 1     |
    | 11 - 20 | 1     |
    It's an autogenerated column heading that Excel put in. How do I change this to say "Number of contracts"? I want it changed because when I chart the pivot table, the series is called Total :(

    Read the article

  • Multiple Internet connections, multiple networks and split access in Linux

    - by Swapneel Patnekar
    I am having trouble setting up multiple internet connections for split access in Linux. We have 3 internet connections from 3 different ISPs. We want to configure our Linux gateway machine such that our three internal networks 10.2.1.0/24, 192.168.20.0/24 & 192.168.2.0/24 use ISP1, ISP2 and ISP3 respectively in a split access manner. Outlined below is the layout/settings.
    Interfaces of the Linux gateway connected to the routers:
    eth0: 10.1.1.2 <----> 10.1.1.1 (internal interface of ADSL router) [ISP1]
    eth1: 192.168.15.2 <----> 192.168.15.1 (internal interface of 3G router) [ISP2]
    eth3: 192.168.1.2 <----> 192.168.1.1 (internal interface of ADSL router) [ISP3]
    Kindly note that none of the interfaces on the Linux gateway has a public static IP address. The routers for ISP1 and ISP2 get assigned a dynamic public IP address when connected to the Internet; the router for ISP3 has been assigned a public static IP address.
    Interface of the Linux gateway connected to a switch:
    eth4:   10.2.1.1 (LAN interface for ISP1)
    eth4:0  192.168.20.1 (LAN interface for ISP2)
    eth4:1  192.168.2.1 (LAN interface for ISP3)
    eth4:0 & eth4:1 are virtual interfaces, with eth4 being the interface connected physically. Based on http://linux-ip.net/html/adv-multi-internet.html I've set the following routes:
    ip route flush table 4
    ip route show table main | grep -Ev ^default | while read ROUTE ; do
        ip route add table 4 $ROUTE
    done
    ip route add table 4 default via 192.168.15.1
    ip rule add fwmark 4 table 4
    ip route flush cache
    Additionally, I am using the following iptables rules to mark & route packets as per the guide mentioned above: http://pastebin.com/KzWHFGJA
    At this point, computers from the 192.168.2.0/24 network are successfully able to reach the Internet through ISP3. 192.168.20.0/24 and 10.2.1.0/24 are unable to access the Internet through ISP1 and ISP2 respectively. Any inputs will be much appreciated!

    Read the article

  • EC2 instance suddenly refusing SSH connections and won't respond to ping

    - by Chris
    My instance was running fine and this morning I was able to access a Ruby on Rails app hosted on it. An hour later I suddenly wasn't able to access my site, my SSH connection attempts were refused and the server wasn't even responding to ping. I didn't change anything on my system during that hour and reboots aren't fixing it. I've never had any problems connecting or pinging the system before. Can someone please help? This is on my production system! OS: CentOS 5 AMI ID: ami-10b55379 Type: m1.small [] ~% ssh -v *****@meeteor.com OpenSSH_5.2p1, OpenSSL 0.9.8l 5 Nov 2009 debug1: Reading configuration data /etc/ssh_config debug1: Connecting to meeteor.com [184.73.235.191] port 22. debug1: connect to address 184.73.235.191 port 22: Connection refused ssh: connect to host meeteor.com port 22: Connection refused [] ~% ping meeteor.com PING meeteor.com (184.73.235.191): 56 data bytes Request timeout for icmp_seq 0 Request timeout for icmp_seq 1 Request timeout for icmp_seq 2 ^C --- meeteor.com ping statistics --- 4 packets transmitted, 0 packets received, 100.0% packet loss [] ~% ========= System Log ========= Restarting system. Linux version 2.6.16-xenU ([email protected]) (gcc version 4.0.1 20050727 (Red Hat 4.0.1-5)) #1 SMP Mon May 28 03:41:49 SAST 2007 BIOS-provided physical RAM map: Xen: 0000000000000000 - 000000006a400000 (usable) 980MB HIGHMEM available. 727MB LOWMEM available. NX (Execute Disable) protection: active IRQ lockup detection disabled Built 1 zonelists Kernel command line: root=/dev/sda1 ro 4 Enabling fast FPU save and restore... done. Enabling unmasked SIMD FPU exception support... done. Initializing CPU#0 PID hash table entries: 4096 (order: 12, 65536 bytes) Xen reported: 2599.998 MHz processor. Dentry cache hash table entries: 131072 (order: 7, 524288 bytes) Inode-cache hash table entries: 65536 (order: 6, 262144 bytes) Software IO TLB disabled vmalloc area: ee000000-f53fe000, maxmem 2d7fe000 Memory: 1718700k/1748992k available (1958k kernel code, 20948k reserved, 620k data, 144k init, 1003528k highmem) Checking if this processor honours the WP bit even in supervisor mode... Ok. Calibrating delay using timer specific routine.. 5202.30 BogoMIPS (lpj=26011526) Mount-cache hash table entries: 512 CPU: L1 I Cache: 64K (64 bytes/line), D cache 64K (64 bytes/line) CPU: L2 Cache: 1024K (64 bytes/line) Checking 'hlt' instruction... OK. Brought up 1 CPUs migration_cost=0 Grant table initialized NET: Registered protocol family 16 Brought up 1 CPUs xen_mem: Initialising balloon driver. highmem bounce pool size: 64 pages VFS: Disk quotas dquot_6.5.1 Dquot-cache hash table entries: 1024 (order 0, 4096 bytes) Initializing Cryptographic API io scheduler noop registered io scheduler anticipatory registered (default) io scheduler deadline registered io scheduler cfq registered i8042.c: No controller found. RAMDISK driver initialized: 16 RAM disks of 4096K size 1024 blocksize Xen virtual console successfully installed as tty1 Event-channel device installed. netfront: Initialising virtual ethernet driver. 
mice: PS/2 mouse device common for all mice md: md driver 0.90.3 MAX_MD_DEVS=256, MD_SB_DISKS=27 md: bitmap version 4.39 NET: Registered protocol family 2 Registering block device major 8 IP route cache hash table entries: 65536 (order: 6, 262144 bytes) TCP established hash table entries: 262144 (order: 9, 2097152 bytes) TCP bind hash table entries: 65536 (order: 7, 524288 bytes) TCP: Hash tables configured (established 262144 bind 65536) TCP reno registered TCP bic registered NET: Registered protocol family 1 NET: Registered protocol family 17 NET: Registered protocol family 15 Using IPI No-Shortcut mode md: Autodetecting RAID arrays. md: autorun ... md: ... autorun DONE. kjournald starting. Commit interval 5 seconds EXT3-fs: mounted filesystem with ordered data mode. VFS: Mounted root (ext3 filesystem) readonly. Freeing unused kernel memory: 144k freed *************************************************************** *************************************************************** ** WARNING: Currently emulating unsupported memory accesses ** ** in /lib/tls glibc libraries. The emulation is ** ** slow. To ensure full performance you should ** ** install a 'xen-friendly' (nosegneg) version of ** ** the library, or disable tls support by executing ** ** the following as root: ** ** mv /lib/tls /lib/tls.disabled ** ** Offending process: init (pid=1) ** *************************************************************** *************************************************************** Pausing... 5Pausing... 4Pausing... 3Pausing... 2Pausing... 1Continuing... INIT: version 2.86 booting Welcome to CentOS release 5.4 (Final) Press 'I' to enter interactive startup. Setting clock : Fri Oct 1 14:35:26 EDT 2010 [ OK ] Starting udev: [ OK ] Setting hostname localhost.localdomain: [ OK ] No devices found Setting up Logical Volume Management: [ OK ] Checking filesystems Checking all file systems. [/sbin/fsck.ext3 (1) -- /] fsck.ext3 -a /dev/sda1 /dev/sda1: clean, 275424/1310720 files, 1161123/2621440 blocks [ OK ] Remounting root filesystem in read-write mode: [ OK ] Mounting local filesystems: [ OK ] Enabling local filesystem quotas: [ OK ] Enabling /etc/fstab swaps: [ OK ] INIT: Entering runlevel: 4 Entering non-interactive startup Starting background readahead: [ OK ] Applying ip6tables firewall rules: modprobe: FATAL: Module ip6_tables not found. ip6tables-restore v1.3.5: ip6tables-restore: unable to initializetable 'filter' Error occurred at line: 3 Try `ip6tables-restore -h' or 'ip6tables-restore --help' for more information. [FAILED] Applying iptables firewall rules: [ OK ] Loading additional iptables modules: ip_conntrack_netbios_ns [ OK ] Bringing up loopback interface: [ OK ] Bringing up interface eth0: Determining IP information for eth0... done. [ OK ] Starting auditd: [FAILED] Starting irqbalance: [ OK ] Starting portmap: [ OK ] FATAL: Module lockd not found. Starting NFS statd: [ OK ] Starting RPC idmapd: FATAL: Module sunrpc not found. FATAL: Error running install command for sunrpc Error: RPC MTAB does not exist. Starting system message bus: [ OK ] Starting Bluetooth services:[ OK ] [ OK ] Can't open RFCOMM control socket: Address family not supported by protocol Mounting other filesystems: [ OK ] Starting PC/SC smart card daemon (pcscd): [ OK ] Starting hidd: Can't open HIDP control socket: Address family not supported by protocol [FAILED] Starting autofs: Starting automount: automount: test mount forbidden or incorrect kernel protocol version, kernel protocol version 5.00 or above required. 
[FAILED] [FAILED] Starting sshd: [ OK ] Starting cups: [ OK ] Starting sendmail: [ OK ] Starting sm-client: [ OK ] Starting console mouse services: no console device found[FAILED] Starting crond: [ OK ] Starting xfs: [ OK ] Starting anacron: [ OK ] Starting atd: [ OK ] % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 390 100 390 0 0 58130 0 --:--:-- --:--:-- --:--:-- 58130 100 390 100 390 0 0 56984 0 --:--:-- --:--:-- --:--:-- 0 Starting yum-updatesd: [ OK ] Starting Avahi daemon... [ OK ] Starting HAL daemon: [ OK ] Starting OSSEC: [ OK ] Starting smartd: [ OK ] c CentOS release 5.4 (Final) Kernel 2.6.16-xenU on an i686 domU-12-31-39-00-C4-97 login: INIT: Id "2" respawning too fast: disabled for 5 minutes INIT: Id "3" respawning too fast: disabled for 5 minutes INIT: Id "4" respawning too fast: disabled for 5 minutes INIT: Id "5" respawning too fast: disabled for 5 minutes INIT: Id "6" respawning too fast: disabled for 5 minutes

    Read the article

  • SQL SERVER – Copy Database – SQL in Sixty Seconds #067

    - by Pinal Dave
    There are multiple reasons why a user may want to make a copy of a database. Sometimes a user wants to copy the database to the same server and sometimes to a different server. The important point is that DBAs and developers may want copies of their database for various purposes; I copy my database for backup purposes. However, when we hear about copying a database, the very first thought which comes to mind is Backup and Restore or Attach and Detach. Both of these processes have their own advantages and disadvantages. As a matter of fact, those methods are efficient and recommended methods. However, if you just want to copy your database as it is and do not want to go for an advanced feature, you can just use the copy feature of SQL Server. Here are the settings which you can use to copy the database. SQL in Sixty Seconds Video: I have attempted to explain the same subject in simple words in the following video. Action Item: Here are the blog posts I have previously written on this subject. You can read them over here:
    - Copy Database from Instance to Another Instance - Copy Paste in SQL Server
    - Copy Database With Data - Generate T-SQL For Inserting Data From One Table to Another Table
    - Copy Data from One Table to Another Table - SQL in Sixty Seconds #031 - Video
    - Generate Script for Schema and Data - SQL in Sixty Seconds #021 - Video
    You can subscribe to my YouTube Channel for frequent updates. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Book Review, SQLAuthority News, T SQL, Video
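    For reference, the Backup and Restore route mentioned above can be scripted in a few lines; the database names, logical file names and paths in this sketch are hypothetical and would need to match sys.master_files on the source:

    ```sql
    -- Copy SourceDB to a new database on the same instance via backup/restore.
    BACKUP DATABASE SourceDB TO DISK = 'C:\Backup\SourceDB.bak';

    RESTORE DATABASE SourceDB_Copy
    FROM DISK = 'C:\Backup\SourceDB.bak'
    WITH MOVE 'SourceDB'     TO 'C:\Data\SourceDB_Copy.mdf',
         MOVE 'SourceDB_log' TO 'C:\Data\SourceDB_Copy_log.ldf';
    ```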

    Read the article

  • Getting error when using Oracle imp command to import dmp file

    - by blizz
    I am importing a dmp file using the following command:
    imp user/pass file=file.dmp log=logfile.log full=y ignore=y destroy=y
    I get the following errors:
    . . importing table "MESSAGE_BOARD"
    IMP-00058: ORACLE error 22993 encountered
    ORA-22993: specified input amount is greater than actual source amount
    IMP-00028: partial import of previous table rolled back: 219638 rows rolled back
    . . importing table "MESSAGE_BOARD_ARCHIVES"
    IMP-00058: ORACLE error 22993 encountered
    ORA-22993: specified input amount is greater than actual source amount
    IMP-00028: partial import of previous table rolled back: 2477960 rows rolled back
    I have tried increasing the tablespace size and adding a second datafile, to no avail. I'm a total newb at Oracle so any help will be appreciated. Googled this for hours and still can't come up with a solution. Thanks!

    Read the article

  • postgresql deleteing old records from log tables

    - by Max
    I have a PostgreSQL database which stores my RADIUS connection information. What I want to do is only keep a month's worth of logs. How would I craft a SQL statement, which I can run from cron, that would go and delete any rows older than a month? The date is taken from the acctstoptime column, in the format 2010-01-27 16:02:17-05. Format of the table in question:
    -- Table: radacct
    CREATE TABLE radacct (
      radacctid bigserial NOT NULL,
      acctsessionid character varying(32) NOT NULL,
      acctuniqueid character varying(32) NOT NULL,
      username character varying(253),
      groupname character varying(253),
      realm character varying(64),
      nasipaddress inet NOT NULL,
      nasportid character varying(15),
      nasporttype character varying(32),
      acctstarttime timestamp with time zone,
      acctstoptime timestamp with time zone,
      acctsessiontime bigint,
      acctauthentic character varying(32),
      connectinfo_start character varying(50),
      connectinfo_stop character varying(50),
      acctinputoctets bigint,
      acctoutputoctets bigint,
      calledstationid character varying(50),
      callingstationid character varying(50),
      acctterminatecause character varying(32),
      servicetype character varying(32),
      xascendsessionsvrkey character varying(10),
      framedprotocol character varying(32),
      framedipaddress inet,
      acctstartdelay integer,
      acctstopdelay integer,
      freesidestatus character varying(32),
      CONSTRAINT radacct_pkey PRIMARY KEY (radacctid)
    )
    WITH (OIDS=FALSE);
    ALTER TABLE radacct OWNER TO radius;
    -- Index: freesidestatus
    CREATE INDEX freesidestatus ON radacct USING btree (freesidestatus);
    -- Index: radacct_active_user_idx
    CREATE INDEX radacct_active_user_idx ON radacct USING btree (username, nasipaddress, acctsessionid) WHERE acctstoptime IS NULL;
    -- Index: radacct_start_user_idx
    CREATE INDEX radacct_start_user_idx ON radacct USING btree (acctstarttime, username);
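    A minimal sketch of the kind of statement the question asks for, using the acctstoptime column from the schema above (whether still-open sessions with a NULL stop time should also be purged is a policy decision left alone here):

    ```sql
    -- Delete accounting rows whose session stopped more than one month ago.
    -- Rows with acctstoptime IS NULL (sessions still open) are not touched.
    DELETE FROM radacct
    WHERE acctstoptime < now() - interval '1 month';
    ```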

    Read the article

  • October 2013 Fusion Middleware (FMW) Proactive Patches released

    - by Irina
    We are glad to announce that the following Fusion Middleware (FMW) Proactive patches were released on October 15, 2013.
    Bundle Patches
    Bundle patches are collections of controlled, well tested critical bug fixes for a specific product which may include security contents and occasionally minor enhancements. These are cumulative in nature, meaning the latest bundle patch in a particular series includes the contents of the previous bundle patches released. A suite bundle patch is an aggregation of multiple product bundle patches that are part of a product suite.
    - Oracle Identity Management Suite Bundle Patch 11.1.1.5.5, consisting of: Oracle Identity Manager (OIM) 11.1.1.5.9 bundle patch, Oracle Access Manager (OAM) 11.1.1.5.6 bundle patch, Oracle Adaptive Access Manager (OAAM) 11.1.1.5.2 bundle patch, and Oracle Entitlement Server (OES) 11.1.1.5.4 bundle patch.
    - Oracle Identity Management Suite Bundle Patch 11.1.2.0.4, consisting of: Oracle Access Manager (OAM) 11.1.2.0.4 bundle patch, Oracle Adaptive Access Manager (OAAM) 11.1.2.0.2 bundle patch, and Oracle Entitlement Server (OES) 11.1.2.0.2 bundle patch.
    - Oracle Identity Analytics (OIA) 11.1.1.5.6 bundle patch.
    - Oracle GlassFish Server (OGFS) 2.1.1.22, 3.0.1.8 and 3.1.2.7 bundle patches.
    - Oracle iPlanet Web Server (OiWS) 7.0.18 bundle patch.
    - Oracle SOA Suite (SOA) 11.1.1.7.1 bundle patch.
    - Oracle WebCenter Portal (WCP) 11.1.1.8.1 bundle patch.
    - Sun Role Manager (SRM) 4.1.7 and 5.0.3.2 bundle patches.
    Patch Set Updates (PSU)
    Patch Set Updates (PSU) are collections of well controlled, well tested critical bug fixes for a specific product that have been proven in customer environments. PSUs may include security contents but no enhancements are included. These are cumulative in nature, meaning the latest PSU in a particular series includes the contents of the previous PSUs released.
    - Oracle Exalogic 2.0.3.0.4 Physical Linux x86-64 and 2.0.4.0.4 Physical Solaris x86-64 PSUs.
    - Oracle WebLogic Server 10.3.6.0.6 and 12.1.1.0.6 PSUs.
    Critical Patch Update (CPU)
    The Critical Patch Update program is Oracle's quarterly release of security fixes. The following additional patches were released as part of Oracle's Critical Patch Update program:
    - Oracle JDeveloper 11.1.2.3.0, 11.1.2.4.0 and 12.1.2.0.0
    - Oracle Outside In Technology 8.4.0 and 8.4.1
    - Oracle Portal 11.1.1.6.0
    - Oracle Security Service 11.1.1.6.0, 11.1.1.7.0 and 12.1.2.0.0
    - Oracle WebCache 11.1.1.6.0 and 11.1.1.7.0
    - Oracle WebCenter Content 10.1.3.5.1, 11.1.1.6.0, 11.1.1.7.0 and 11.1.1.8.0
    - Oracle WebServices 10.1.3.5.0 and 11.1.1.6.0
    For more information:
    - Master Notes on Fusion Middleware Proactive Patching
    - PSU and CPU October 2013 Availability Document
    - Critical Patch Update Advisory - October 2013

    Read the article

  • Visio 2010 Professional No Database Tab

    - by JFB
    I am trying to create an ERD using Visio Professional 2010 (full install) but I am having problems. I can't open the table properties window at the bottom of the screen when I double-click on a table, and I can't right-click on the table and choose Properties because the choice is not there. The Database tab is missing from the ribbon at the top. What can cause this? Here is what I have already tried: File/Options/Trust Center/Trust Center Settings.../Add-ins - "Disable all Application Add-ins" is UNCHECKED.

    Read the article

  • SQL Saturday Atlanta: Intro To Performance Tuning

    - by Mike Femenella
    I'm looking forward to speaking in Atlanta on the 24th; it will be fun to get back down that way to visit with some friends and present two topics that I really enjoy.
    First, an introduction to performance tuning. Performance tuning is a very wide and deep topic and we're staying close to the surface. I aim this class at newbie SQL users who have less than 2 years of experience. It's all the things I wish someone had told me in my first 2 years about what to look for when the database was slow... or allegedly slow, I should say. We'll cover using Profiler to find slow-performing queries and how to save the data off to a table, as well as a tour of other features; the difference between clustered, non-clustered and covering indexes; how to look at and understand an execution plan (at a high level); and finally the difference between a temp table and a table variable and what the implications are of using either one in your code. That pretty much takes up a full hour.
    Second presentation, Loading Data in Real Time. It's really a presentation about partitioning, but with a twist that we used at work recently to solve a need to load some data quickly and put it into production with minimal downtime. We'll cover partition functions, schemes, $PARTITION, merge, sys.partitions, and show some examples of building a set of partitioned tables and using the SWITCH statement to move data from one table to another. Finally we'll cover the differences in partitioning between 2005 and 2008. Hope to see you there! And if you read my blog please introduce yourself!
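    The switch-based load described above boils down to a metadata-only move; a minimal sketch (table and partition numbers are invented for illustration) looks something like this:

    ```sql
    -- Stage the new data in a table that matches the partitioned target's
    -- schema, indexes and filegroup, then switch it in as partition 3.
    -- SWITCH is a metadata operation, so the "load" into production is
    -- effectively instantaneous once the staging table is ready.
    ALTER TABLE dbo.SalesStaging
    SWITCH TO dbo.Sales PARTITION 3;
    ```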

    Read the article

  • Sorting versus hashing

    - by Paul Siegel
    My problem is as follows. I have an array of n strings with m < n of them distinct. I want to create a one-to-one function which assigns each of the m distinct strings to the numbers 0 ... m-1. For example, if my strings are: Bob, Amy, Bob, Charlie, Amy then the function: Bob -> 0, Amy -> 1, Charlie -> 2 would meet my needs. I have thought of three possible approaches:
    1. Sort the list of strings, remove duplicates, and construct the function using a search algorithm.
    2. Create a hash table and check each string to see if it is already in the table before inserting it.
    3. Sort the list of strings, remove duplicates, and put the resulting list into a hash table.
    My code will be written in Java, and I will likely use standard Java algorithms: merge sort for sorting, binary search for searching, and whatever the standard Java hash table algorithm is. Question: Assume that after creating the function I will have to evaluate it on each of the n original strings. Which of the three approaches is fastest? Is there a better way? Part of the problem is that I don't really know what's going on "under the hood" in standard hashing algorithms. Any help would be appreciated.

    Read the article

  • Reading a large SQL Errorlog

    - by steveh99999
    I came across an interesting situation recently where a SQL instance had been configured with the audit of successful and failed logins being written to the errorlog, i.e. every time a user or the application connected to the SQL instance, an entry was written to the errorlog. This meant... huge SQL Server errorlogs. Opening an errorlog in the usual way, using SQL Management Studio, was extremely slow. Luckily, I was able to use xp_readerrorlog to work around this - here are some example queries.
    To show errorlog entries from the currently active log, just for today:
    DECLARE @now DATETIME
    DECLARE @midnight DATETIME
    SET @now = GETDATE()
    SET @midnight = DATEADD(d, DATEDIFF(d, 0, GETDATE()), 0)
    EXEC xp_readerrorlog 0, 1, NULL, NULL, @midnight, @now
    To find out how big the current errorlog actually is, and what the earliest and most recent entries are in the errorlog:
    CREATE TABLE #temp_errorlog (Logdate DATETIME, ProcessInfo VARCHAR(20), Text VARCHAR(4000))
    INSERT INTO #temp_errorlog EXEC xp_readerrorlog 0 -- for current errorlog
    SELECT COUNT(*) AS 'Number of entries in errorlog', MIN(logdate) AS 'ErrorLog Starts', MAX(logdate) AS 'ErrorLog Ends' FROM #temp_errorlog
    DROP TABLE #temp_errorlog
    To show just DBCC history information in the current errorlog:
    EXEC xp_readerrorlog 0, 1, 'dbcc'
    To show backup errorlog entries in the current errorlog:
    CREATE TABLE #temp_errorlog (Logdate DATETIME, ProcessInfo VARCHAR(20), Text VARCHAR(4000))
    INSERT INTO #temp_errorlog EXEC xp_readerrorlog 0 -- for current errorlog
    SELECT * FROM #temp_errorlog WHERE ProcessInfo = 'Backup' ORDER BY Logdate
    DROP TABLE #temp_errorlog
    xp_readerrorlog is an undocumented system stored procedure, so there is no official Microsoft link describing the parameters it takes; however, there's a good blog on this here. And, if you do have a problem with huge errorlogs, please consider running the system stored procedure sp_cycle_errorlog on a nightly or regular basis. But if you do this, remember to change the number of errorlogs you retain - the default of 6 might not be sufficient for you.
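    In the login-audit situation that caused the bloat in the first place, the same search-string parameter can be used to pull just those entries; the exact message text to search for is an assumption here and may need adjusting:

    ```sql
    -- Show only login-audit style entries from the current errorlog.
    EXEC xp_readerrorlog 0, 1, 'Login succeeded'
    EXEC xp_readerrorlog 0, 1, 'Login failed'
    ```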

    Read the article

  • Excel Data Organization: Array Formulas? Tables? Named Range?

    - by Joe Arasin
    I'm trying to make a huge Excel sheet reasonably maintainable, but it's huge in the "hundred-table-db" direction, rather than the "hundred-thousand-row-table" direction. I want to have a baseline data table that looks something like this:
    | Indicator           | Units      | 2010 | 2015 | 2020 | 2025 | Source     |
    | GDP                 | $Gazillion | 300  | 350  | 400  | 450  | BLS        |
    | Population          | Millions   | 350  | 400  | 450  | 500  | Census     |
    | PetMonkeyPopulation | Thousands  | 50   | 60   | 70   | 80   | SimiansRUs |
    And then be able to have another sheet that looks like:
    |                  | 2010 | 2015 | 2020 | 2025 |
    | MonkeysPerCapita | .1   | .2   | .3   | .4   |
    | MonkeysPerDollar | .01  | .01  | .01  | .01  |
    | GDPPerCapita     | 300  | 400  | 450  | 600  |
    Is there some standard way to make this kind of thing maintainable?

    Read the article

  • System Requirements of a write-heavy applications serving hundreds of requests per second

    - by Rolando Cruz
    NOTE: I am a self-taught PHP developer who has little to no experience managing web and database servers. I am about to write a web-based attendance system for a very large userbase. I expect around 1000 to 1500 users logged in at the same time, making at least 1 request every 10 seconds or so for a span of 30 minutes a day, 3 times a week. So it's more or less 100 requests per second, or at the very worst 1000 requests in a second (an average of 16 concurrent requests? But it could be higher given the short timeframe in which users will make these requests. Crosses fingers to avoid 100 concurrent requests). I expect two types of transactions, a local (not referring to a local network) and a foreign transaction.
    Local transactions basically download userdata in their locality and cache it for 1 - 2 weeks. Attendance requests will probably be two numeric strings only: userid and eventid.
    Foreign transactions are for attendance of those who do not belong in the current locality. These will pass in the following data instead: (numeric) locality_id, (string) full_name.
    Both requests are done in Ajax so no HTML data is included, only JSON. Both types of requests expect at the very least a single numeric response from the server. I think there will be a 50-50 split in the frequency of local and foreign transactions, but there's only a few bytes of difference anyway in the sizes of these transactions. As of this moment the userid may only reach 6 digits and eventids are 4- to 5-digit integers too. I expect my users table to have at least 400k rows, the event table to have as many as 10k rows, a locality table with at least 1500 rows, and my main attendance table to increase by 400k rows (based on the number of users in the users table) a day for 3 days a week (1.2M rows a week). For me, this sounds big. But is this really that big? Or can this be handled by a single server (not sure about the server specs yet since I'll probably avail of a VPS from ServInt or others)? I tried to read up on multiple-server setups: Heartbeat, DRBD, master-slave setups. But I wonder if they're really necessary. The users table will add around 500 to 1k rows a week. If this can't be handled by a single server, then if I am to choose a MySQL replication topology, what would be the best setup for this case? Sorry if I sound vague or the question is too wide. I just don't know what to ask or what you want to know at this point.

    Read the article
