Search Results

Search found 582 results on 24 pages for 'integrity'.


  • Openmatics Revolutionizes Fleet Management with Standards-Based Vehicle Telematics Platform

    - by Michael Snow
    Openmatics s.r.o. was founded in 2010 as a subsidiary of ZF Friedrichshafen AG, a global player in driveline and chassis technology.

    Oracle Customer: Openmatics s.r.o. | Location: Pilsen, Czech Republic | Industry: Automotive | Employees: 70

    Its goal was to develop and operate a flexible, open telematics platform for automotive applications that is independent of vehicle and component suppliers—recognizing that the fragmented telematics market was not meeting today’s fleet management needs. Openmatics provides a rich product portfolio, and customers can extend the platform, as required, to meet their needs. Partners and third parties can develop their own applications using the Openmatics software development kit and can sell them via the Openmatics app shop.

    ZF Friedrichshafen AG is a global player in driveline and chassis technology. With 121 production companies and 650 service partners in 26 countries, ZF is among the top 10 largest automotive suppliers worldwide. Founded in 1915 to develop and produce transmissions for airships and vehicles, the group’s product offerings now include transmissions and steering systems as well as chassis components and complete axle systems and modules.

    A word from Openmatics s.r.o.: “Oracle WebCenter Portal, together with the underlying Oracle Application Development Framework, provided the fundamental infrastructure for the Openmatics platform. Fleet managers can now reduce fuel consumption and operating costs, and more efficiently manage vehicle usage, maintenance, and safety. The standards-based platform allows third-party suppliers to deploy their own vehicle telematics services as Openmatics apps and creates a de facto standard for the automotive industry, independent from a single manufacturer or service provider.” – Gero Strobel, Head of Development, Openmatics s.r.o.
    Challenges

      - Create an industry standard for vehicle telematics by establishing a customizable platform that enables access to telematics information, such as current and past fuel consumption, through a web browser to better meet automotive market and customer needs
      - Reduce fleet-management costs by eliminating the need to invest in isolated telematics hardware and software solutions per vehicle brand and vehicle component manufacturer
      - Establish an open platform where third-party providers—such as original equipment manufacturers (OEMs), insurers, fleet operators, and individual developers—can deploy their own vehicle telematics services
      - Allow users to purchase targeted telematics services as single apps to reduce costs and ensure rapid growth of telematics services available on the platform
      - Enable users to configure their telematics apps with ease to make sure the platform meets individual fleet management requirements, such as analyzing past and current fuel consumption of a truck fleet

    Solutions

      - Deployed Oracle WebCenter Portal as a foundation for Openmatics, a standards-based automotive telematics platform that provides next-generation fleet management with unified digital communication from and to vehicles on the move
      - Used Oracle Application Development Framework as the development framework for Oracle WebCenter Portal’s components and services, providing developers with ready-to-use software development kits with application programming interfaces, design templates, and visual tools that accelerated time to market
      - Used Oracle Enterprise Pack for Eclipse to simplify telematics application development in Java
      - Enabled fleet monitoring by recording vehicle data—such as fuel consumption information—through onboard units, delivering the information to Oracle Database, and making it accessible through a customizable app portfolio on any web browser
      - Stored vehicle telematics data—sent as encrypted information—in Oracle Database, ensuring data integrity and immediate availability for the platform’s telematics applications
      - Enabled a wide range of telematics services suppliers, from vehicle component manufacturers to fleet application developers, to offer vehicle telematics services on the Openmatics platform, ensuring platform independence from OEMs
      - Provided Openmatics customers with the means to individually select the automotive telematics services that are relevant to their business requirements, eliminating the need to pay for superfluous information and reducing fleet management costs

    Oracle Products & Services

      - Oracle Application Development Framework
      - Oracle WebCenter Portal
      - Oracle SOA Suite
      - Oracle Enterprise Pack for Eclipse
      - Oracle Database
      - Oracle Consulting

    Read the article

  • Managing Operational Risk of Financial Services Processes – part 2/2

    - by Sanjeev Sharma
    In my earlier blog post, I described the factors that lead to the compliance complexity of financial services processes. In this post, I will outline the business implications of the increasing process compliance complexity and the specific role of BPM in addressing the operational risk reduction objectives of regulatory compliance.

    First, let’s look at the business implications of the increasing complexity of process compliance for financial institutions:

      - Increased time and cost of compliance, due to duplication of effort in conforming to regulatory requirements as processes change in response to evolving regulatory mandates, shifting business priorities, or internal/external audit requirements
      - Delays in audit reporting, due to quality issues in reconciling non-standard process KPIs and integrity concerns arising from the need to rely on multiple data sources for a given process

    Next, let’s consider some approaches to managing the operational risk of business processes. Financial institutions looking to reduce the operational risk of their processes generally have two choices:

      - Rip and replace existing applications with new off-the-shelf applications.
      - Extend the capabilities of existing applications by modeling their data and process interactions, with other applications or user channels, outside of the application boundary using BPM.

    The benefit of the first approach is that compliance with new regulatory requirements is embedded within the boundaries of these applications. However, the pre-built compliance of any packaged or custom-built application should not be mistaken for a one-shot fix for future compliance needs, because business needs and regulatory requirements inevitably outgrow the end-to-end capabilities of even the most comprehensive packaged or custom-built business application. Thus, processes that originally resided within the application will eventually spill outside the application boundary. It is precisely at such hand-offs between applications, or between overlaid processes, that vulnerabilities to unknown and accidental faults arise, potentially resulting in errors and partial or total failure. The gist of the argument is that processes which reside outside application boundaries, in other words span multiple applications, constitute a latent operational risk that spans the end-to-end value chain. For instance, distortion of data flowing from an account-opening application to a credit-rating system, if left unchecked, renders compliance with “KYC” policies void even when the “KYC” checklist was enforced at the time of data capture by the account-opening application. Oracle Business Process Management is enabling financial institutions to lower the operational risk of such process “gaps” for financial services processes including “Customer On-boarding”, “Quote-to-Contract”, “Deposit/Loan Origination”, “Trade Exceptions”, and “Interest Claim Tracking”.
    If you are faced with a similar challenge and need guidance, feel free to drop me a note.

    Read the article

  • Documentation and Test Assertions in Databases

    - by Phil Factor
    When I first worked with Sybase/SQL Server, we thought our databases were impressively large but they were, by today’s standards, pathetically small. We had one script to build the whole database. Every script I ever read was richly annotated; it was more like reading a document. Every table had a comment block, and every line would be commented too. At the end of each routine (e.g. procedure) was a quick integration test, or series of test assertions, to check that nothing in the build was broken. We simply ran the build script, stored in the Version Control System, and it pulled everything together in a logical sequence that not only created the database objects but pulled in the static data. This worked fine at the scale we had. The advantage was that one could, by reading the source code, reach a rapid understanding of how the database worked and how one could interface with it. The problem was that it was a system in which only one developer at a time could work on the database. It was very easy for a developer to accidentally execute the entire build script rather than the selected section on which he or she was working, thereby cleansing the database of everyone else’s work-in-progress and data.

    It soon became the fashion to work at the object level, so that programmers could check out individual views, tables, functions, constraints and rules and work on them independently. It was then that I noticed the trend to generate the source for the VCS retrospectively from the development server. Tables were worst affected. You can, of course, add or delete a table’s columns and constraints retrospectively, which means that the existing source no longer represents the current object. If, after your development work, you generate the source from the live table, then you get no block or line comments, and the source script is sprinkled with silly square-brackets and other confetti, thereby rendering it visually indigestible. Routines, too, were affected. In our system, every routine had a directly attached string of unit-tests. A retro-generated routine has no unit-tests or test assertions. Yes, one can still commit our test code to the VCS, but it’s a separate module, and teams end up running the whole suite of tests for every individual change, rather than just the tests for that routine, which doesn’t scale for database testing.

    With extended properties, one can get the best of both worlds, and even use them to put blame, praise or annotations into your VCS. It requires a lot of work, though, particularly the script to generate the table. The problem is that there are no conventional names beyond ‘MS_Description’ for the special use of extended properties. This makes it difficult to do splendid things such as ensuring the integrity of the build by running a suite of tests that are actually stored in extended properties within the database, and therefore the VCS. We have lost the readability of database source code over the years, and largely jettisoned the use of test assertions as part of the database build. This is not unexpected in view of the increasing complexity of the structure of databases and the number of programmers working on them. There must, surely, be a way of getting them back, but I sometimes wonder if I’m one of very few who miss them.
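
    As a rough sketch of the idea, a test assertion could be attached to a routine as an extended property and read back when only that routine’s tests need to run. The property name ‘Test_Assertion’ and the object names here are invented for illustration, not an established convention:

        -- Attach a test assertion to a routine as an extended property
        -- (hypothetical names: dbo.CustomerTotal and 'Test_Assertion').
        EXEC sys.sp_addextendedproperty
             @name  = N'Test_Assertion',
             @value = N'EXEC dbo.CustomerTotal @CustomerID = 1; -- expect 42',
             @level0type = N'SCHEMA',    @level0name = N'dbo',
             @level1type = N'PROCEDURE', @level1name = N'CustomerTotal';

        -- Read the assertion back, e.g. from a build script, so that only the
        -- tests for the routine that changed need to be executed.
        SELECT ep.value AS test_assertion
        FROM   sys.extended_properties AS ep
        JOIN   sys.objects AS o
               ON o.object_id = ep.major_id
        WHERE  o.name = N'CustomerTotal'
          AND  ep.name = N'Test_Assertion';

    Because the property is stored with the routine, scripting the object out for the VCS can carry the assertion along with it.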

    Read the article

  • Faster Trip to Innovation with Simplified Data Integration: Sabre Holdings Case Study

    - by Tanu Sood
    Author: Irem Radzik, Director of Product Marketing, Data Integration, Oracle

    In today’s fast-paced, competitive environment, IT teams are under pressure to deliver technology solutions for many critical business initiatives as fast as possible. When the focus is on speed, it can be easy to continue to use old-style, point-to-point custom scripts that grow organically to the point where they are unmanageable and too costly to maintain. As data volumes, data sources, and end users grow, uncoordinated data integration efforts create significant inefficiencies for both IT and business users. In addition to losing IT productivity due to maintaining spaghetti architecture, data integrity becomes a concern as well. Errors caused by inconsistent data and manual data entry can prove very costly for companies and disrupt business activities. Many industry leaders now recognize that data should be moved in an automated and reliable manner across all platforms to have one version of the truth. By simplifying their data integration architecture and standardizing on a centralized approach, IT teams can accelerate time to market. In particular, a centralized, shared-service approach brings agility, increases IT productivity, and frees up resources for innovation.

    One such industry leader that simplified its data integration architecture is Sabre Holdings. Sabre Holdings provides distribution and technology solutions for the travel industry, and is a winner of the Oracle Excellence Awards for Fusion Middleware in 2011 in the data integration category. I had the pleasure of hosting Sabre Holdings on a public webcast to discuss their data integration best practices for data warehousing. In this webcast, Sabre’s Amjad Saeed presented how the company reduced complexity by consolidating systems and standardizing development on Oracle Data Integrator and Oracle GoldenGate for its global data warehouse development team. With Oracle’s complete real-time data integration solution, Sabre also streamlined support and maintenance operations, achieved a real-time view of the execution of the integration processes, and can manage data warehouse and business intelligence solution performance on demand. By reducing complexity and leveraging timely market insights, the company was able to decrease time to market by 40%.

    You can now listen to the webcast on demand: Sabre Holdings Case Study: Accelerating Innovation using Oracle Data Integration. I invite you to hear directly from Sabre how to use advanced data integration capabilities to enable accelerated innovation. To learn more about Oracle’s data integration offering, you can download our free resources.

    Read the article

  • MySQL Server 5.6 default my.cnf and my.ini

    - by user12626240
    We've introduced a default my.cnf / my.ini file for MySQL Server that you can now see in the 5.6.8 release candidate:

        # For advice on how to change settings please see
        # http://dev.mysql.com/doc/refman/5.6/en/server-configuration-defaults.html

        [mysqld]

        # Remove leading # and set to the amount of RAM for the most important data
        # cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
        # innodb_buffer_pool_size = 128M

        # Remove leading # to turn on a very important data integrity option: logging
        # changes to the binary log between backups.
        # log_bin

        # These are commonly set, remove the # and set as required.
        # basedir = .....
        # datadir = .....
        # port = .....
        # socket = .....
        # server_id = .....

        # Remove leading # to set options mainly useful for reporting servers.
        # The server defaults are faster for transactions and fast SELECTs.
        # Adjust sizes as needed, experiment to find the optimal values.
        # join_buffer_size = 128M
        # sort_buffer_size = 2M
        # read_rnd_buffer_size = 2M

        sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES

    There is also a template file called my-default.cnf or my-default.ini that has these lines near the start:

        # *** DO NOT EDIT THIS FILE. It's a template which will be copied to the
        # *** default location during install, and will be replaced if you
        # *** upgrade to a newer version of MySQL.

    On Linux systems, the mysql_install_db command will copy the template file to the final location, where the server will read and use the file, removing the extra three lines. On Windows, the installer will create extra settings based on the answers you gave during installation. Neither will overwrite an existing my.cnf or my.ini file.

    The only initially active setting here is to change the value of sql_mode from the server default of NO_ENGINE_SUBSTITUTION to NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES. This strict mode changes warnings for some non-standard behaviour into errors. This can cause applications which rely on the non-standard things, like dates that aren't valid, to lose data. If we had just changed the server default, the new setting would affect all servers that lack an explicit sql_mode setting, including those where strict mode is harmful. So we did it in the default file instead, because that will only affect new server installations. You should expect that in our next version after 5.6, the server default will include STRICT_TRANS_TABLES. Our Windows installer and some of our connectors already use STRICT_TRANS_TABLES by default. Strict has been our preferred setting for many years and it is good to see some development platforms are using it.

    If you need the old behaviour, just remove the STRICT_TRANS_TABLES setting. If you do this, please also ask your application provider to make it unnecessary. They can do that by setting the session sql_mode setting in their own connections, so the rest of the applications using the server don't have to have an undesirable default.

    We've kept this file as small as possible because we found that our old files were too big and confused people. We've also now removed the old my-huge and related example files. One key part of this is the link to the documentation, where we will provide an introduction to some key settings. We'd like to hear your feedback on settings that will benefit most users or are most important to call out for existing users. Please do that by commenting here or, if you prefer, by adding comments to this bug report.
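
    As the post notes, an application that cannot yet cope with strict mode can relax sql_mode for its own connections only, leaving the server default untouched for everyone else. A minimal illustration, run by the application right after it connects:

        -- Only this connection sees the relaxed mode; the default in my.cnf
        -- still applies to every other client of the server.
        SET SESSION sql_mode = 'NO_ENGINE_SUBSTITUTION';

        -- Check what the session and the server are actually using.
        SELECT @@SESSION.sql_mode, @@GLOBAL.sql_mode;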

    Read the article

  • Challenges in Corporate Reporting - New Independent Research

    - by ndwyouell
    Earlier this year, Oracle and Accenture sponsored a global study on trends in financial close and reporting. We surveyed 1,123 finance professionals in large organizations in 12 countries around the world during February and March. Financial Consolidation and Reporting is the most mature aspect of Enterprise Performance Management, with mainstream solutions having been around for over 30 years. But of course over this time there have been many changes and very significant increases in regulation. So just what is the current state of Financial Consolidation and Reporting in our major corporations across the world? We commissioned this independent research to find out. Highlights of the results are:

      - Seeking change: Businesses recognize they need to invest in financial reporting to address the challenges they currently face. 47 percent of companies have made substantial investments over the last year to the financial close, filing, and reporting processes.
      - Ineffective investments: Despite these investments, spreadsheets (72 percent) and e-mails (68 percent) are still being used daily to track and manage reporting, suggesting that new investments are falling short of expectations.
      - Increased costs and uncertainty: The situation is so opaque that managers across the finance function are unable to fully understand the financial impact or cost implications of reporting, with 60 percent of respondents admitting they did not know the total cost of managing and publicizing their financial results.
      - Persistent challenges: 68 percent of respondents admitted that they have inadequate visibility into reporting processes, while 84 percent of finance managers surveyed said they find it difficult to control the quality of financial data across the entire reporting process.
      - Decreased effectiveness: 71 percent of finance managers feel their effectiveness is limited in some way by data-analysis-related issues, while 39 percent of C-level or VP-level respondents say their effectiveness is impaired by limited visibility.
      - Missed deadlines: Due to late changes to the chart of accounts, 15 percent of global businesses have missed statutory filings, putting their companies at risk of financial penalties and potentially impacting share value.

    The report makes it clear that investments made to date by these large organizations around the world have been uneven across the close, reporting, and filing processes, which has led to the challenges these organizations currently face in the overall process. Regardless of whether companies are using a variety of solutions or a single solution, the report shows they continue to witness increased costs, ineffectual data management, and missed reporting, which—in extreme circumstances—can impact a company’s corporate image and share value. The good news is that businesses realize that these problems persist, and 86 percent of companies are likely to make a significant investment during the next five years to address these issues.
    While they should invest, it is critical that they direct investments correctly to address the key issues this research identified:

      - Improving data integrity
      - Optimizing processes
      - Integrating the extended financial close process

    By addressing these issues and with clear guidance on how to implement the correct business processes, infrastructure, and software solutions, finance teams will find that their reporting processes are much more effective, cost-efficient, and aligned with their performance expectations.

    To get a copy of the full report: http://www.oracle.com/webapps/dialogue/ns/dlgwelcome.jsp?p_ext=Y&p_dlg_id=11747758&src=7300117&Act=92

    To replay a webcast discussing the findings: http://www.cfo.com/webcast.cfm?webcast=14639438&pcode=ORA061912_ORA

    Read the article

  • ZFS Storage Appliance and LDAP Integration

    - by user13138569
    Let’s connect the ZFS Storage Appliance to an OpenLDAP server. First, set up OpenLDAP, in this case on Solaris 11. The slapd.conf used, and the ldif that defines the test user user01, are shown below.

    slapd.conf:

        #
        # See slapd.conf(5) for details on configuration options.
        # This file should NOT be world readable.
        #
        include         /etc/openldap/schema/core.schema
        include         /etc/openldap/schema/cosine.schema
        include         /etc/openldap/schema/nis.schema

        # Define global ACLs to disable default read access.

        # Do not enable referrals until AFTER you have a working directory
        # service AND an understanding of referrals.
        #referral       ldap://root.openldap.org

        pidfile         /var/openldap/run/slapd.pid
        argsfile        /var/openldap/run/slapd.args

        # Load dynamic backend modules:
        modulepath      /usr/lib/openldap
        moduleload      back_bdb.la
        # moduleload    back_hdb.la
        # moduleload    back_ldap.la

        # Sample security restrictions
        #       Require integrity protection (prevent hijacking)
        #       Require 112-bit (3DES or better) encryption for updates
        #       Require 63-bit encryption for simple bind
        # security ssf=1 update_ssf=112 simple_bind=64

        # Sample access control policy:
        #       Root DSE: allow anyone to read it
        #       Subschema (sub)entry DSE: allow anyone to read it
        #       Other DSEs:
        #               Allow self write access
        #               Allow authenticated users read access
        #               Allow anonymous users to authenticate
        #       Directives needed to implement policy:
        # access to dn.base="" by * read
        # access to dn.base="cn=Subschema" by * read
        # access to *
        #       by self write
        #       by users read
        #       by anonymous auth
        #
        # if no access controls are present, the default policy
        # allows anyone and everyone to read anything but restricts
        # updates to rootdn. (e.g., "access to * by * read")
        #
        # rootdn can always read and write EVERYTHING!

        #######################################################################
        # BDB database definitions
        #######################################################################

        database        bdb
        suffix          "dc=oracle,dc=com"
        rootdn          "cn=Manager,dc=oracle,dc=com"
        # Cleartext passwords, especially for the rootdn, should
        # be avoid. See slappasswd(8) and slapd.conf(5) for details.
        # Use of strong authentication encouraged.
        rootpw          secret
        # The database directory MUST exist prior to running slapd AND
        # should only be accessible by the slapd and slap tools.
        # Mode 700 recommended.
        directory       /var/openldap/openldap-data
        # Indices to maintain
        index   objectClass     eq

    The ldif that was loaded into the directory:

        dn: dc=oracle,dc=com
        objectClass: dcObject
        objectClass: organization
        dc: oracle
        o: oracle

        dn: cn=Manager,dc=oracle,dc=com
        objectClass: organizationalRole
        cn: Manager

        dn: ou=People,dc=oracle,dc=com
        objectClass: organizationalUnit
        ou: People

        dn: ou=Group,dc=oracle,dc=com
        objectClass: organizationalUnit
        ou: Group

        dn: uid=user01,ou=People,dc=oracle,dc=com
        uid: user01
        objectClass: top
        objectClass: account
        objectClass: posixAccount
        objectClass: shadowAccount
        cn: user01
        uidNumber: 10001
        gidNumber: 10000
        homeDirectory: /home/user01
        userPassword: secret
        loginShell: /bin/bash
        shadowLastChange: 10000
        shadowMin: 0
        shadowMax: 99999
        shadowWarning: 14
        shadowInactive: 99999
        shadowExpire: -1

    Once the LDAP server is running, register it on the ZFS Storage Appliance under Configuration > SERVICES > LDAP, setting the Base search DN and the address of the LDAP server. The appliance can then look up the LDAP user user01; if a user is instead reported as "Unknown or invalid user", the name cannot yet be resolved against the directory. On the Solaris 11 host that will access the share, configure the LDAP client as well and verify name resolution with getent:

        # svcadm enable svc:/network/nis/domain:default
        # svcadm enable ldap/client
        # ldapclient manual -a authenticationMethod=none -a defaultSearchBase=dc=oracle,dc=com -a defaultServerList=192.168.56.201
        System successfully configured
        # getent passwd user01
        user01:x:10001:10000::/home/user01:/bin/bash

    Now mount a share from the appliance and access it as user01:

        # mount -F nfs -o vers=3 192.168.56.101:/export/user01 /mnt
        # su user01
        bash-4.1$ cd /mnt
        bash-4.1$ touch aaa
        bash-4.1$ ls -l
        total 1
        -rw-r--r--   1 user01   10000          0 May 31 04:32 aaa

    The share can now be used with accounts managed in LDAP!

    Read the article

  • Migrate from MySQL to PostgreSQL on Linux (Kubuntu)

    - by Dave Jarvis
    Storyline

    Trying to migrate a database from MySQL to PostgreSQL. All the documentation I have read covers, in great detail, how to migrate the structure. I have found very little documentation on migrating the data. The schema has 13 tables (which have been migrated successfully) and 9 GB of data.

    MySQL version: 5.1.x
    PostgreSQL version: 8.4.x

    I want to use the R programming language to analyze the data using SQL select statements; PostgreSQL has PL/R, but MySQL has nothing (as far as I can tell).

    A long time ago in a galaxy far, far away...

    Create the database location (/var has insufficient space; also dislike having the PostgreSQL version number everywhere -- upgrading would break scripts!):

        sudo mkdir -p /home/postgres/main
        sudo cp -Rp /var/lib/postgresql/8.4/main /home/postgres
        sudo chown -R postgres.postgres /home/postgres
        sudo chmod -R 700 /home/postgres
        sudo usermod -d /home/postgres/ postgres

    All good to here. Next, restart the server and configure the database using these installation instructions:

        sudo apt-get install postgresql pgadmin3
        sudo /etc/init.d/postgresql-8.4 stop
        sudo vi /etc/postgresql/8.4/main/postgresql.conf

    Change data_directory to /home/postgres/main, then:

        sudo /etc/init.d/postgresql-8.4 start
        sudo -u postgres psql postgres
        \password postgres
        sudo -u postgres createdb climate
        pgadmin3

    Use pgadmin3 to configure the database and create a schema.

    A New Hope

    The episode began in a remote shell known as bash, with both databases running, and the installation of a command with a most unusual logo: SQL Fairy.

        perl Makefile.PL
        sudo make install
        sudo apt-get install perl-doc   (strangely, it is not called perldoc)
        perldoc SQL::Translator::Manual

    Extract a PostgreSQL-friendly DDL and all the MySQL data:

        sqlt -f DBI --dsn dbi:mysql:climate --db-user user --db-password password -t PostgreSQL > climate-pg-ddl.sql
        mysqldump --skip-add-locks --complete-insert --no-create-db --no-create-info --quick --result-file="climate-my.sql" --databases climate --skip-comments -u root -p

    The Database Strikes Back

    Recreate the structure in PostgreSQL as follows:

      - pgadmin3 (switch to it)
      - Click the Execute arbitrary SQL queries icon
      - Open climate-pg-ddl.sql
      - Search for TABLE " replace with TABLE climate." (insert the schema name climate)
      - Search for on " replace with on climate." (insert the schema name climate)
      - Press F5 to execute

    This results in: Query returned successfully with no result in 122 ms.

    Replies of the Jedi

    At this point I am stumped.

      - Where do I go from here (what are the steps) to convert climate-my.sql to climate-pg.sql so that they can be executed against PostgreSQL?
      - How do I make sure the indexes are copied over correctly (to maintain referential integrity; I don't have constraints at the moment to ease the transition)?
      - How do I ensure that adding new rows in PostgreSQL will start enumerating from the index of the last row inserted (and not conflict with an existing primary key from the sequence)? (A sketch follows the resource list below.)

    Resources

    A fair bit of information was needed to get this far:

      - https://help.ubuntu.com/community/PostgreSQL
      - http://articles.sitepoint.com/article/site-mysql-postgresql-1
      - http://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL#MySQL
      - http://pgfoundry.org/frs/shownotes.php?release_id=810
      - http://sqlfairy.sourceforge.net/

    Thank you!
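
    For the sequence question flagged above, a common idiom after the data load is to point each serial column’s sequence at the current maximum, so new rows continue numbering from there. The table and column names below (climate.station, id) are invented placeholders; substitute the real ones:

        -- Bump the sequence behind a serial column to MAX(id) of the imported data,
        -- so the next INSERT does not collide with an existing primary key.
        SELECT setval(
                 pg_get_serial_sequence('climate.station', 'id'),
                 COALESCE((SELECT MAX(id) FROM climate.station), 1)
               );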

    Read the article

  • How to reinstall OEM Windows 98 SE?

    - by Sammy
    I'm trying to install Windows 98 SE on an old PC and it's not going well. I run into this problem.

        Searching for Boot Record from Floppy..OK
        Starting Windows 98...
        TOSHIBA Enhanced-IDE CD/DVD-ROM Device Driver (ATAPI) Version 2.24
        (C)Copyright Toshiba Corp. 1995-1999. All rights reserved.
        Device Name : TOSCD001
        Number of units : 1
        MSCDEX Version 2.25
        Copyright (C) MIcrosoft Corp. 1986-1995. All rights reserved.
        Drive Z: = Driver TOSCD001 unit 0
        TOSHIBA MACHINE
        Invalid drive specification
        Path not found - C:\TOOLS\CDROMDRV.SYS
        Invalid drive specification
        Invalid drive specification

    After that last line, it leaves me at a bitmap image displaying instructions to reboot with Ctrl+Alt+Del. It doesn't say why I have to reboot, and it doesn't state any error type; it just wants me to reboot for no apparent reason. After a reboot, it just boots up from the floppy again and cycles through the same thing all over again.

    The computer has been restored to its original specification. The original system recovery "CD-ROM" discs are available and they are not scratched or anything; they are in very good condition. It's a set of 3 CDs, and the first disc, labeled "1/3", should be the one holding the OEM version of Windows 98 SE. There is also a boot disk for Windows 98. I'm not sure what the other two discs are for. This computer came with support for three languages, so those could be holding different language versions or additional OEM discs. But I'm quite sure that the first disc holds the main operating system.

    BIOS has been set to optimized defaults. Boot priority is as follows: Floppy, IDE-0, CD-ROM. Under Standard CMOS settings, the BIOS scans and autoconfigures both the hard drive and the CD/DVD drive. On POST it finds them both, and it finds the DOS boot disk and starts preparing for installation, as you can see above. So what's this "invalid drive specification" about? Why isn't the installation starting?

    Updates

    Update 1

    Booting from CD disc 2

    In desperation I tried booting from the second CD. Boot order was: Floppy, CD-ROM, IDE-0. It boots normally from the floppy disk, just like above, but then returns the following.

        File not found - Z:\3LNGINST\TOOLS\PARTINFO.TXT

    I accidentally pressed some key on the keyboard, and before I knew it, the following screen showed up.

        Create Primary DOS Partition
        Current fixed disk drive: 1
        Verifying drive integrity, 16% complete.

    After completion another screen showed up.

        Create Primary DOS Partition
        Current fixed disk drive: 1
        Do you wish to use the maximum available size for a Primary DOS Partition
        and make the partition active (Y/N)?....................? [Y]
        Verifying drive integrity, 7% complete.

    I didn't choose Yes, it was set automatically. After completion the computer was automatically rebooted. Then I got a new screen. This is in Norwegian/Swedish/Finnish. Here's the message in Swedish.

        Hårddisken är inte klar för återställning av programvara.
        Installationsprogrammet måste skapa nya partitioner (C:, D:, ...).
        VARNING! ALLT INNEHÅLL PÅ HÅRDDISKEN KOMMER ATT RADERAS!
        Tryck på en tangent om du vill fortsätta (eller CTRL-C för att avbryta).

    Let me translate that: The hard drive is not ready for restoring the software. The setup program has to create new partitions (C:, D:, ...). WARNING! ALL CONTENTS ON THE HARD DRIVE WILL BE ERASED! Press any key to continue (or CTRL-C to cancel).

    I pressed Enter and it started formatting the hard drive.

        WARNING, ALL DATA ON NON-REMOVABLE DISK
        DRIVE c: WILL BE LOST!
        Proceed with Format (Y/N)?y
        Formatting 14,67.53M
        1 percent completed.
It automatically sets the "y" option and starts formatting. Rebooting with CD disc 1 After completing this operation it rebooted automatically. I inserted CD disc 1 and there was no issue with "invalid drive specification" anymore. Instead, a bitmap menu was displayed where it asked me to choose a language. And I thought I had it there for a while but it didn't work out. After choosing the language, another menu was displayed asking me to choose a type of recovery (restore pre-installed software OR restore hard drive partitions and pre-installed software). I opted for the second option. Then a data destruction warning showed up where I just pressed 1 to Continue. It did something and then just rebooted and the same formatting screen shows up as before. So something is not right. Am I doing it wrong? I seem to have come past the CD-ROM driver issue at least. But now I'm stuck with this problem... it seems to have something to do with the hard drive. Like... why is is it always trying to format it? Isn't it enough to format it once? By the way, it needs to be formatted as FAT32, right? Windows 98 doesn't support NTFS? I think FDISK should have taken care of this already. I know this is an old hard drive, but I connected to my main computer and it was able to read and write to it without a problem. It does have bad sectors though, but it's expected on an old hard drive like this. Any ideas?.. Update 2 I seem to be repeatedly getting stuck at the format screen where it asks to press any key to continue. So tried to cancel it this time with Ctrl+C. This leaves me at: A:\TOOLS> I can do DIR and CD and I tried to change to Z: drive. I tried running "setup" but there is no such thing. Z:\>setup Bad command or file name Update 3 Floppy structure Here's the file/folder structure of the floppy disk. A:\>dir /s Volume in drive A has no label. Volume Serial Number is 1700-1069 Directory of A:\ 1999-10-11 10:44 <DIR> BMP 1998-05-11 22:01 93 880 COMMAND.COM 1999-10-11 10:44 <DIR> factory 1999-10-11 10:44 <DIR> lang 1999-10-11 10:44 <DIR> TOOLS 2000-05-19 15:32 339 CONFIG.SYS 1999-10-26 13:38 0 BOOTLOG.TXT 2000-06-08 08:32 3 691 AUTOEXEC.BAT 4 File(s) 97 910 bytes Directory of A:\BMP 1999-10-11 10:44 <DIR> . 1999-10-11 10:44 <DIR> .. 0 File(s) 0 bytes Directory of A:\factory 1999-10-11 10:44 <DIR> . 1999-10-11 10:44 <DIR> .. 2000-06-08 13:09 2 662 3LNGINSF.BAT 1 File(s) 2 662 bytes Directory of A:\lang 1999-10-11 10:44 <DIR> . 1999-10-11 10:44 <DIR> .. 1998-11-24 08:02 49 575 FORMAT.COM 1998-11-24 08:02 63 900 FDISK.EXE 2 File(s) 113 475 bytes Directory of A:\TOOLS 1999-10-11 10:44 <DIR> . 1999-10-11 10:44 <DIR> .. 
1998-05-06 22:01 49 575 FORMAT.COM 1995-10-27 20:29 28 164 BMPVIEW.EXE 1999-01-26 15:54 15 MAKEPA32.TXT 1998-05-06 22:01 3 878 XCOPY.EXE 1998-05-06 22:01 41 472 XCOPY32.MOD 1998-05-06 22:01 33 191 HIMEM.SYS 1998-05-06 22:01 125 495 EMM386.EXE 1998-05-06 22:01 18 967 SYS.COM 1996-01-31 21:55 18 CLK.COM 1994-04-02 08:20 22 HARDBOOT.COM 1999-02-03 15:46 15 MAKEPA16.TXT 1999-04-14 16:36 7 840 PARTFO32.EXE 2000-05-19 15:01 1 169 PARTFORM.BAT 1996-10-02 01:47 1 642 MBRCLR.COM 1999-07-01 11:58 8 175 BIOSCHKN.EXE 1998-06-23 08:55 5 904 PAR-TYPE.EXE 1998-11-24 08:02 29 271 MODE.COM 1998-11-24 08:02 15 252 ATTRIB.EXE 1998-11-24 08:02 19 083 DELTREE.EXE 1999-04-21 15:01 23 304 NTBB.EXE 1997-05-07 14:19 1 SYS.TXT 1999-07-01 12:23 61 566 F3DCHK.EXE 1998-05-11 20:01 34 566 KEYBOARD.SYS 1998-05-11 20:01 19 927 KEYB.COM 1999-10-26 14:31 910 partinfo.txt 1998-06-16 15:58 5 936 CHKDRVAC.EXE 1998-05-06 22:01 63 900 FDISK.EXE 1998-05-06 22:01 45 379 SMARTDRV.EXE 1992-12-03 19:48 10 695 SCISET.EXE 1997-06-25 15:49 6 YENT 1998-05-06 22:01 25 473 MSCDEX.EXE 1998-05-06 22:01 5 239 CHOICE.COM 1997-07-18 17:41 6 876 MBR.COM 1997-07-01 15:01 6 545 CHK2GB.COM 1998-06-10 20:04 8 128 PARTFORM.EXE 1990-01-04 02:09 19 MAKEPAR2.TXT 1990-01-04 01:00 27 MAKEPAR3.TXT 1990-01-04 01:00 27 MAKEPAR4.TXT 1998-02-13 13:47 15 MAKEPART.TXT 1999-04-14 13:47 5 200 DISKSIZE.EXE 1999-05-06 14:56 7 856 PARTFO16.EXE 1999-01-13 11:13 13 720 CDROMDRV.SYS 42 File(s) 734 463 bytes Total Files Listed: 49 File(s) 948 510 bytes 12 Dir(s) 268 800 bytes free A:\> CONFIG.SYS contents Here's the content of CONFIG.SYS. DEVICE=A:\TOOLS\HIMEM.SYS /TESTMEM:OFF REM I=B000-B7ff for Desktop BIOSes rem DEVICE=A:\TOOLS\EMM386.EXE NOEMS I=B000-B7ff x=C000-D000 DEVICE=A:\TOOLS\EMM386.EXE NOEMS x=C000-D000 DEVICE=A:\TOOLS\CDROMDRV.SYS /D:TOSCD001 BUFFERS=10 FILES=69 DOS=HIGH,UMB STACKS=9,256 LASTDRIVE=Z SWITCHES=/F SHELL=A:\COMMAND.COM /P /E:2048 AUTOEXEC.BAT contents :BEGIN @ECHO OFF PATH=A:\;A:\TOOLS; MSCDEX /D:TOSCD001 /L:Z /M:10 smartdrv 1024 128 SET TOOLS=A:\TOOLS SET COMSPEC=A:\COMMAND.COM SET EXITDRIVE=C: SET EXITPATH=\ CALL Z:\SETENV.BAT > NUL :TOSHCHK BIOSChkN IF NOT ERRORLEVEL 1 goto C_ACCESS BMPVIEW Z:\3LNGINST\BMP\NO_TOSP3.bmp /X=120 /Y=80 PAUSE > NUL SET EXITDRIVE=A: GOTO END :C_ACCESS CALL PARTFORM.BAT :C_EMPTY IF EXIST C:\*.* GOTO C_NOTEMPTY call z:\setenv.bat>nul goto PREPDU :C_NOTEMPTY REM ------------------MENU------------------------ :STARTMENU CLS BMPVIEW Z:\3LNGINST\BMP\LANGSELC.BMP /X=120 /Y=120 CLK CHOICE /C:123 /N >NUL REM L is the language that is selected IF ERRORLEVEL 1 SET L=%LNG1% IF ERRORLEVEL 2 SET L=%LNG2% IF ERRORLEVEL 3 SET L=%LNG3% SET BMP=BMP%L% BMPVIEW Z:\3LNGINST\%bmp%\HDDMENU.BMP /X=72 /Y=82 CLK CHOICE /C:129F /N > NUL IF ERRORLEVEL 4 GOTO FACTORY_MENU IF ERRORLEVEL 3 GOTO EXIT_MENU IF ERRORLEVEL 2 GOTO PARTFORM_MENU IF ERRORLEVEL 1 GOTO FORMAT_MENU GOTO END :FACTORY_MENU BMPVIEW Z:\3LNGINST\%bmp%\qformat.bmp /X=120 /Y=140 CLK choice /c:12 /N >nul IF ERRORLEVEL 2 GOTO STARTMENU IF ERRORLEVEL 1 GOTO FORMATF GOTO END :EXIT_MENU BMPVIEW Z:\3LNGINST\%bmp%\9.bmp /XC /X=96 /Y=267 choice /C:1pause /T:1,01 >nul SET EXITDRIVE=A: SET EXITPATH=\lang cls mode mono rem keyb xx>nul cls GOTO END :PARTFORM_MENU BMPVIEW Z:\3LNGINST\%bmp%\2.bmp /XC /X=96 /Y=216 choice /C:1pause /T:1,01 >nul BMPVIEW Z:\3LNGINST\%bmp%\partform.bmp /X=120 /Y=140 CLK choice /c:12 /N >nul IF ERRORLEVEL 2 GOTO STARTMENU IF ERRORLEVEL 1 GOTO PART_FORM SET EXITDRIVE=A: GOTO END :FORMAT_MENU BMPVIEW Z:\3LNGINST\%bmp%\1.bmp /XC /X=96 /Y=165 choice /C:1pause /T:1,01 
>nul BMPVIEW Z:\3LNGINST\%bmp%\qformat.bmp /X=120 /Y=140 CLK choice /c:12 /N >nul IF ERRORLEVEL 2 GOTO STARTMENU IF ERRORLEVEL 1 GOTO FORMAT SET EXITDRIVE=A: GOTO END REM ------------------ MENU END ------------------------ :FORMAT bmpview Z:\3LNGINST\%bmp%\1.bmp /XC /X=145 /Y=235 choice /C:1pause /T:1,01 >nul CLS IF (%QFORMAT%)==(NO) GOTO FULLFO FORMAT C: /Q /V:"" <A:\TOOLS\YENT >NUL call z:\setenv.bat>nul goto PREPDU :FULLFO FORMAT C: /V:"" <A:\TOOLS\YENT call z:\setenv.bat>nul goto PREPDU :FORMATF CLS IF (%QFORMAT%)==(NO) GOTO FULLFO_F FORMAT C: /Q /V:"" <A:\TOOLS\YENT >NUL call z:\setenv.bat>nul goto PREPDU_F :FULLFO_F FORMAT C: /V:"" <A:\TOOLS\YENT call z:\setenv.bat>nul goto PREPDU_F :PART_FORM bmpview Z:\3LNGINST\bmp\1.bmp /XC /X=145 /Y=235 choice /C:1pause /T:1,01 >nul MBR /! HARDBOOT REM ====================== Triple Select ====================== :PREPDU XCOPY z:\3LNGINST\*.* C:\*.* /E /S /V >NUL ATTRIB -H -R -S C:\TOOLS\CDROMDRV.SYS COPY A:\TOOLS\CDROMDRV.SYS C:\TOOLS /Y SYS C: >NUL goto REBOOT :PREPDU_F copy A:\TOOLS\SMARTDRV.EXE C:\ /Y ATTRIB -H -R -S C:\SMARTDRV.EXE copy A:\FACTORY\3LNGINSF.bat c:\ c:\3LNGINSF.bat cls REM ====================== Dual Select END ====================== REM --------------- END ------------------ :REBOOT SMARTDRV.EXE /C bmpview Z:\3LNGINST\BMP\reboot3.bmp /X=120 /Y=140 :FOREVER pause >nul goto FOREVER :END SMARTDRV.EXE /C %EXITDRIVE% cd %EXITPATH% echo on CD structure S:\>dir /s Volume in drive S is T3ELK4SC Volume Serial Number is 2042-5BC9 Directory of S:\ 2000-08-22 14:14 <DIR> 3LNGINSF 2000-08-22 14:14 <DIR> 3LNGINST 2000-06-15 15:57 <DIR> CRC 2000-06-15 12:04 387 667 767 T310C1NO.W98 2000-09-07 15:36 273 setenv.BAT 2 File(s) 387 668 040 bytes Directory of S:\3LNGINSF 1601-01-01 02:00 <DIR> . 1601-01-01 02:00 <DIR> .. 1999-10-27 10:51 1 806 AUTOEXEC.BAT 2000-08-22 14:14 <DIR> BMP 2000-05-19 15:29 265 CONFIG.SYS 2000-08-22 14:14 <DIR> POSTINST 2000-08-22 14:14 <DIR> TOOLS 2000-08-22 14:14 <DIR> WIN98SYS 2 File(s) 2 071 bytes Directory of S:\3LNGINSF\BMP 1601-01-01 02:00 <DIR> . 1601-01-01 02:00 <DIR> .. 1997-04-22 09:43 718 1.BMP 1997-04-22 09:44 718 2.BMP 1999-01-04 02:38 718 3.BMP 2000-07-05 11:22 60 118 Cdchg2.bmp 2000-07-05 11:22 60 118 Cdchg3.bmp 2000-07-05 13:37 60 118 Fin.bmp 2000-07-06 14:18 120 118 Menu.bmp 2000-07-05 13:34 60 118 Nor.bmp 2000-07-05 11:53 35 318 Progress.bmp 2000-07-05 13:40 60 118 Swe.bmp 2000-07-05 12:09 84 118 Wrongcd2.bmp 2000-07-05 12:09 84 118 Wrongcd3.bmp 12 File(s) 626 416 bytes Directory of S:\3LNGINSF\POSTINST 1601-01-01 02:00 <DIR> . 1601-01-01 02:00 <DIR> .. 2000-05-19 09:15 33 POSTINST.BAT 1 File(s) 33 bytes Directory of S:\3LNGINSF\TOOLS 1601-01-01 02:00 <DIR> . 1601-01-01 02:00 <DIR> .. 2000-07-06 14:49 3 593 3LNGINST.BAT 1998-11-24 08:02 15 252 ATTRIB.EXE 1995-10-27 18:29 28 164 BMPVIEW.EXE 1999-01-13 11:13 13 720 CDROMDRV.SYS 1998-05-06 20:01 5 239 CHOICE.COM 1996-01-31 19:55 18 CLK.COM 1998-11-24 08:02 19 083 DELTREE.EXE 1998-05-06 20:01 125 495 EMM386.EXE 1999-07-01 12:23 61 566 F3DCHK.EXE 1998-05-06 20:01 49 575 FORMAT.COM 1994-04-02 06:20 22 HARDBOOT.COM 1998-05-06 20:01 33 191 HIMEM.SYS 1998-05-06 20:01 25 473 MSCDEX.EXE 1998-05-06 20:01 12 663 RAMDRIVE.SYS 1998-05-06 20:01 45 379 SMARTDRV.EXE 1997-05-07 14:19 1 SYS.TXT 1995-09-27 14:25 6 813 VOLCHECK.EXE 1998-05-06 20:01 3 878 XCOPY.EXE 1998-05-06 20:01 41 472 XCOPY32.MOD 1997-06-25 13:49 6 YENT 20 File(s) 490 603 bytes Directory of S:\3LNGINSF\WIN98SYS 1601-01-01 02:00 <DIR> . 1601-01-01 02:00 <DIR> .. 
1998-12-04 20:00 222 390 IO.SYS 1998-05-06 20:01 18 967 SYS.COM 1998-05-06 20:01 93 880 command.com 3 File(s) 335 237 bytes Directory of S:\3LNGINST 1601-01-01 02:00 <DIR> . 1601-01-01 02:00 <DIR> .. 1999-05-31 09:51 1 576 AUTOEXEC.BAT 2000-08-22 14:14 <DIR> BMP 2000-08-22 14:14 <DIR> Bmpfin 2000-08-22 14:14 <DIR> Bmpnor 2000-08-22 14:14 <DIR> Bmpswe 2000-05-19 15:30 265 CONFIG.SYS 2000-08-22 14:14 <DIR> POSTINST 2000-08-22 14:14 <DIR> TOOLS 2000-08-22 14:14 <DIR> WIN98SYS 2 File(s) 1 841 bytes Directory of S:\3LNGINST\BMP 1601-01-01 02:00 <DIR> . 1601-01-01 02:00 <DIR> .. 1997-04-22 09:43 718 1.BMP 1997-04-22 09:44 718 2.BMP 1999-01-04 02:38 718 3.BMP 2000-07-05 11:22 60 118 Cdchg2.bmp 2000-07-05 11:22 60 118 Cdchg3.bmp 2000-07-05 13:37 60 118 Fin.bmp 2000-07-06 14:18 120 118 Menu.bmp 2000-07-05 13:34 60 118 Nor.bmp 2000-07-05 11:53 35 318 Progress.bmp 2000-07-06 14:08 40 518 Reboot3.bmp 2000-07-05 13:40 60 118 Swe.bmp 2000-07-05 12:09 84 118 Wrongcd2.bmp 2000-07-05 12:09 84 118 Wrongcd3.bmp 2000-07-05 13:52 48 118 langselc.bmp 2000-07-05 11:47 57 318 no_tosp3.bmp 15 File(s) 772 370 bytes Directory of S:\3LNGINST\Bmpfin 1601-01-01 02:00 <DIR> . 1601-01-01 02:00 <DIR> .. 1997-04-22 09:43 718 1.BMP 1997-04-22 09:44 718 2.BMP 1998-06-13 00:07 718 9.bmp 2000-03-08 15:02 78 486 Hddmenu.bmp 2000-03-08 15:31 25 318 No_tospc.bmp 2000-03-08 15:37 36 518 PARTFORM.BMP 2000-03-08 15:42 36 518 Qformat.bmp 7 File(s) 178 994 bytes Directory of S:\3LNGINST\Bmpnor 1601-01-01 02:00 <DIR> . 1601-01-01 02:00 <DIR> .. 1997-04-22 09:43 718 1.BMP 1997-04-22 09:44 718 2.BMP 1998-06-13 00:07 718 9.bmp 1999-05-05 13:26 78 486 Hddmenu.bmp 1998-07-13 11:36 25 318 No_tospc.bmp 1998-07-13 11:41 36 518 PARTFORM.BMP 1998-07-13 11:45 36 518 Qformat.bmp 7 File(s) 178 994 bytes Directory of S:\3LNGINST\Bmpswe 1601-01-01 02:00 <DIR> . 1601-01-01 02:00 <DIR> .. 1997-04-22 09:43 718 1.BMP 1997-04-22 09:44 718 2.BMP 1998-06-13 00:07 718 9.bmp 1999-05-06 08:14 78 486 Hddmenu.bmp 1998-07-10 16:25 25 318 No_tospc.bmp 1998-07-10 16:29 36 518 PARTFORM.BMP 1998-07-10 17:08 36 518 Qformat.bmp 7 File(s) 178 994 bytes Directory of S:\3LNGINST\POSTINST 1601-01-01 02:00 <DIR> . 1601-01-01 02:00 <DIR> .. 2000-05-19 09:15 33 POSTINST.BAT 1 File(s) 33 bytes Directory of S:\3LNGINST\TOOLS 1601-01-01 02:00 <DIR> . 1601-01-01 02:00 <DIR> .. 2000-05-19 14:52 3 898 3LNGINST.BAT 1995-10-27 18:29 28 164 BMPVIEW.EXE 1999-01-13 11:13 13 720 CDROMDRV.SYS 1998-05-06 20:01 5 239 CHOICE.COM 1996-01-31 19:55 18 CLK.COM 1998-05-06 20:01 125 495 EMM386.EXE 1999-07-01 12:23 61 566 F3DCHK.EXE 1998-05-06 20:01 49 575 FORMAT.COM 1994-04-02 06:20 22 HARDBOOT.COM 1998-05-06 20:01 33 191 HIMEM.SYS 1998-05-06 20:01 25 473 MSCDEX.EXE 2000-07-06 14:41 910 PARTINFO.TXT 1998-05-06 20:01 12 663 RAMDRIVE.SYS 1998-05-06 20:01 45 379 SMARTDRV.EXE 1997-05-07 14:19 1 SYS.TXT 1995-09-27 14:25 6 813 VOLCHECK.EXE 1998-05-06 20:01 3 878 XCOPY.EXE 1998-05-06 20:01 41 472 XCOPY32.MOD 1997-06-25 13:49 6 YENT 19 File(s) 457 483 bytes Directory of S:\3LNGINST\WIN98SYS 1601-01-01 02:00 <DIR> . 1601-01-01 02:00 <DIR> .. 1998-12-04 20:00 222 390 IO.SYS 1998-05-06 20:01 18 967 SYS.COM 1998-05-06 20:01 93 880 command.com 3 File(s) 335 237 bytes Directory of S:\CRC 1601-01-01 02:00 <DIR> . 1601-01-01 02:00 <DIR> .. 2000-06-15 12:07 181 422 T310C1NO.ALL 2000-06-15 12:09 215 427 T310C1NO.CRC 2000-06-15 12:07 2 157 T310C1NO.HID 3 File(s) 399 006 bytes Total Files Listed: 104 File(s) 391 625 352 bytes 42 Dir(s) 0 bytes free S:\> Now which line or lines need to be changed? 
    Do I really have to change drive letter Z: to C:?

    Proposed solutions

    Solution #1: Ramhound proposed changing the boot order to the following: CD-ROM, IDE-0, Floppy. This didn't help. In fact, here is the result of it.

        Searching for Boot Record from CDROM..Not Found
        Searching for Boot Record from IDE-0.. OK
        Missing operating system

    Any other ideas?

    Solution #2: Rik proposed running Z:\setup. Now that I have found a way to drop to a DOS prompt with Ctrl+C as described above (Update 2), I did try running setup, but there is no such command or file in there. So that didn't work.

    Read the article

  • Intermittent Disconnection of Client Computers from Domain Server

    - by dilip nagle
    The Background: I have Windows Server 2008 Enterprise with 25 user CALs. It has a domain, and all users and a network-shared HP printer are in it. The server has two network cards, and both these cards as well as all client machines are on the 192.168.1.* addressing scheme with subnet mask 255.255.255.0. Of the two network cards, 192.168.1.231 and 192.168.1.233, only 192.168.1.231 is registered with DNS. 192.168.1.233 (i.e. the second network card) has 192.168.1.231 as its default gateway and DNS address. The server has three hard disks with capacities of 500 GB, 500 GB and 1 TB, partitioned as (C, D, E), (F, G) and (K), with partition K holding all user data in various shared folders. Each of these folders (on partition K) is mapped onto each user's computer according to the access rights given to them.

    The Problem: The server was installed about 6 months ago and, to date, not even once has it hung or given any problem. All the client computers are able to run the web-based software via the server's IP address, e.g. http://192.168.1.231/webERP/default.aspx. However, occasionally, when any client computer tries to browse the network mappings, it hangs. Again, there is no fixed pattern; this may happen after running smoothly for, say, 3 days.

    On each client machine, the network settings are as follows:
    IP Address: 192.168.1.* where * is 1, 2, 3 ...
    Subnet mask: 255.255.255.0
    Default gateway: 192.168.1.231, which is a server card and the DNS address
    Preferred DNS server: 192.168.1.231

    In the Advanced tab under WINS, LMHOSTS lookup is unticked and the default option is selected. Ideally, I would have loved to disable NetBIOS over TCP/IP, but some network printers do not get accessed if this option is enabled (i.e. radio-buttoned), even though disabling NetBIOS would drastically reduce the traffic of NetBIOS broadcasts to all the computers on the net for name resolution. On the server, I have WINS running, on which I have scavenged records, verified database integrity, removed tombstoned records, and so on. The only critical errors, shown once a day when the server is started, are 4224 (WINS) and 12923 (Server Licensing failed to update the DNS record). I fail to understand why client machines hang when they try to browse mapped network shared folders on the K drive. Kindly advise.

    Read the article

  • File Not Found error launching Guest OS in a non-administrator user account

    - by ToreTrygg
    Hi, I am running Fusion 2.0.6 (196839) on an iMac running 10.6.2 with 3 user accounts (1 Administrator). I have Fusion set up to share the Guest OS, and it's been working splendidly for nearly a year. Within the Guest OS (Windows XP PRO), there are also 3 user accounts (1 Administrator). Last night I went to back up my VM to an external drive, and to minimize the file size and transfer time, I deleted all Snapshots but the most-recent one. I then backed up the VM externally (28.23 GB). Today, one of my users tried to launch the Guest OS from within her user account, and received the following error message: "File Not Found: Windows XP Pro-000006.vmdk "This file is required to power on this virtual machine. If this file was moved, please provide its new location." My two choices are Cancel and Browse. When I browse, I can locate the Windows XP Pro-000006.vmdk file, which appears to be contained within the VM file (Windows XP Pro.vmwarevm). However, it still won't launch from a non-Admin user account. If I view the package contents of the VM file from the user account, the above file is present and appears to be created upon each launch of the Guest OS. If I go back to my Administrator account on the Mac and then launch Fusion, the Guest OS works perfectly for all 3 user accounts within XP Pro. I have tried to delete the Guest OS from Fusion's Library within the problem user account, then re-connect it to that Library, but the result is the same. The Guest OS data integrity is 100% -- but is accessible only from the OS X Administrator account. This problem only surfaced after deleting several older Snapshots. Again, the data is there, the Guest OS powers up normally in the Mac's Administrator account, but persistently returns the above error when attempting to power on from a non-Admin account on the Mac. I'm not sure how this is affecting the error, but when I look at Hard Disk Settings, the "unable-to-locate" file is the filename of the virtual HD. I don't want to make any changes to my (working) VM without any advice from the knowledgeable people on this forum. Any help will be greatly appreciated, thanks!

    Read the article

  • Unable to locate Windows Server Error log files

    - by Sam007
    I am getting an error in my application: 500 Internal Server Error. Firebug gives me:

        NetworkError: 500 Internal Server Error - http://webgis.arizona.edu/ArcGIS/rest/services/webGIS/Shock_Models/GPServer/Income_Log/jobs/jc09c501156564f71abc5d98393581267/results/final_shp?dpi=96&transparent=true&format=png8&imageSR=102100&f=image&bbox=%7B%22xmin%22%3A-14519891.438356264%2C%22ymin%22%3A637618.0139790997%2C%22xmax%22%3A-6692739.741956295%2C%22ymax%22%3A6507981.786279075%2C%22spatialReference%22%3A%7B%22wkid%22%3A102100%7D%7D&bboxSR=102100&size=800%2C600

    And when I go to that particular link, I get this error: Server Error - Object reference not set to an instance of an object. Any idea how I can correct it?

    UPDATE

    I was told to use Fiddler and look at the error details occurring over the network, and this is the output that I got:

        SESSION STATE: Done.
        Response Entity Size: 849 bytes.

        == FLAGS ==================
        BitFlags: [ClientPipeReused, ServerPipeReused] 0x18
        X-CLIENTPORT: 2010
        X-RESPONSEBODYTRANSFERLENGTH: 849
        X-EGRESSPORT: 2023
        X-HOSTIP: 128.196.53.161
        X-PROCESSINFO: firefox:2248
        X-CLIENTIP: 127.0.0.1
        X-SERVERSOCKET: REUSE ServerPipe#2

        == TIMING INFO ============
        ClientConnected: 15:53:51.383
        ClientBeginRequest: 15:53:51.494
        GotRequestHeaders: 15:53:51.494
        ClientDoneRequest: 15:53:51.494
        Determine Gateway: 0ms
        DNS Lookup: 0ms
        TCP/IP Connect: 0ms
        HTTPS Handshake: 0ms
        ServerConnected: 15:52:45.077
        FiddlerBeginRequest: 15:53:51.495
        ServerGotRequest: 15:53:51.495
        ServerBeginResponse: 15:53:51.679
        GotResponseHeaders: 15:53:51.679
        ServerDoneResponse: 15:53:51.679
        ClientBeginResponse: 15:53:51.679
        ClientDoneResponse: 15:53:51.679
        Overall Elapsed: 00:00:00.1850106

        The response was buffered before delivery to the client.

        == WININET CACHE INFO ============
        This URL is not present in the WinINET cache. [Code: 2]
        * Note: Data above shows WinINET's current cache state, not the state at the time of the request.
        * Note: Data above shows WinINET's Medium Integrity (non-Protected Mode) cache only.

    But I am still confused as to what the error is. This is the application. I am not sure if the error is due to ArcGIS Server or Windows Server 2008. I am new to working with Windows Server and wanted to know where I can look for the error log files. This is the link which gives the details and the log info of the job executed. This is the output.

    Read the article

  • Recommendations or advice for shared computer control

    - by Telemachus
    Basic scenario: we are a school (overwhelmingly Mac, some Windows machines via BootCamp), and we are considering using DeepFreeze to guard the state of our shared machines. We have roughly 250 machines that are either shared laptops (which move around quite a bit) or common desktops in public spaces. Obviously, we spend a lot of time maintaining the machines and trying to reverse the inevitable drift as people make changes to the computers. We would like to control the integrity of the build we initially put onto the machines without handcuffing users and especially without using Mac's Parental Control software. (We've had nothing but bad experiences with it.) We've been testing DeepFreeze, and so far it's very impressive. But I'm curious to hear if people who have used DeepFreeze or any similar software have any advice or tips. To get things started, I will post my own pros and cons.

    Pros:

      - The state of the machine is frozen in our chosen state. All changes made to the machine after that disappear upon restart. (This frozen state really appears to cover everything. I have yet to do something to a test machine that isn't instantly healed.)
      - Tons of trivial but time-consuming maintenance is gone in an instant. Also, lots of not-so-trivial breakage should be avoided.
      - There are good options, however, that allow you to create storage spaces either globally or per user. (Otherwise, stored files disappear upon reboot. For some machines, this is a good option itself. Simply warn people: save externally or else; this machine is a kiosk, not your storage space.)

    Cons:

      - Anytime we actually need to make a change (upgrade basic software, add a printer or an airport permanently, add new software), the process is a bit more complex. Reboot into a special mode (thaw state), make changes, reboot back into frozen mode. If (when?) we forget this, we will end up making changes that disappear after the next reboot.
      - Users will forget to save files correctly (in the right place or externally), and we will have loud, unpleasant conversations explaining that we can't recover the document they worked on all afternoon yesterday. The machine rebooted. The file is gone.

    These are my initial thoughts, but I would love to hear from other people who have experience with DeepFreeze or any similar software. What should we be careful about? Do the pros outweigh the cons? What gains or problems am I not seeing? Thanks.

    Read the article

  • Educate me - should I buy these prebuilt NAS (which is better) or make my own?

    - by user29336
    I'm trying to learn as much as possible, and I think I've learned quite a bit, so bear with me here in my confusion. I found a couple of NAS setups. I'm not sure if one is better than the other, other than the price being higher on some, and some coming with drives vs. not. Let me list my setup so you can get an idea of what I want to provide: MacBook Pro, Mac Mini for media streaming (so far), Windows 7 gaming computer, Xbox 360. I'd like to provide a storage system for all these devices so they can access files very easily, and I'd also like any of these devices to be able to stream media from this storage system. I'd like this storage system to be hassle free in terms of my confidence in the data integrity. If a drive fails, I want to know that I can replace the drive and all my files will still exist. I'd like to access this storage system OUTSIDE of my LAN. If I'm out on a job for work I'd like to go in, or be able to have people DL some files. This brings me to a question: is this what iSCSI is? I'd like this data system to be able to download torrents. I want to mount any drive on this storage system onto my OS X laptop as if it were a local drive attached. (Is this what iSCSI is?) I'd like this system to have a GOOD web based GUI. I don't want to install software to use it. I believe those are most of my requirements. If I'm missing something that I have no knowledge about, can someone educate me? Here are the systems I found: $729ish on Newegg Lacie 5Big Network 2 (comes with 5TB of space. iSCSI / mac compatible, torrents, nice ui, + others?) Is this overpriced for what it provides? It almost seems like a great deal to me because of the 5TB of space it comes with vs the other NAS systems that don't come with storage but cost $600-700. Should I get a different NAS system? Netgear? Others? Do they have the same features? Better? Is it better to buy your own disks? What about making my own? I'm tech-savvy all around. It seems cheaper to buy a premade one, especially with the support/warranty it provides...

    Read the article

  • What happens to missed writes after a zpool clear?

    - by Kevin
    I am trying to understand ZFS' behaviour under a specific condition, but the documentation is not very explicit about this so I'm left guessing. Suppose we have a zpool with redundancy. Take the following sequence of events: A problem arises in the connection between device D and the server. This causes a large number of failures and ZFS therefore faults the device, putting the pool in degraded state. While the pool is in degraded state, the pool is mutated (data is written and/or changed.) The connectivity issue is physically repaired such that device D is reliable again. Knowing that most data on D is valid, and not wanting to stress the pool with a resilver needlessly, the admin instead runs zpool clear pool D. This is indicated by Oracle's documentation as the appropriate action where the fault was due to a transient problem that has been corrected. I've read that zpool clear only clears the error counter, and restores the device to online status. However, this is a bit troubling, because if that's all it does, it will leave the pool in an inconsistent state! This is because mutations in step 2 will not have been successfully written to D. Instead, D will reflect the state of the pool prior to the connectivity failure. This is of course not the normative state for a zpool and could lead to hard data loss upon failure of another device - however, the pool status will not reflect this issue! I would at least assume based on ZFS' robust integrity mechanisms that an attempt to read the mutated data from D would catch the mistakes and repair them. However, this raises two problems: Reads are not guaranteed to hit all mutations unless a scrub is done; and Once ZFS does hit the mutated data, it (I'm guessing) might fault the drive again because it would appear to ZFS to be corrupting data, since it doesn't remember the previous write failures. Theoretically, ZFS could circumvent this problem by keeping track of mutations that occur during a degraded state, and writing them back to D when it's cleared. For some reason I suspect that's not what happens, though. I'm hoping someone with intimate knowledge of ZFS can shed some light on this aspect.

    Read the article

  • New MySQL Cluster 7.3 Previews: Foreign Keys, NoSQL Node.js API and Auto-Tuned Clusters

    - by Mat Keep
    At this week's MySQL Connect conference, Oracle previewed an exciting new wave of developments for MySQL Cluster, further extending its simplicity and flexibility by expanding the range of use-cases, adding new NoSQL options, and automating configuration. What’s new: Development Release 1: MySQL Cluster 7.3 with Foreign Keys Early Access “Labs” Preview: MySQL Cluster NoSQL API for Node.js Early Access “Labs” Preview: MySQL Cluster GUI-Based Auto-Installer In this blog, I'll introduce you to the features being previewed. Review the blogs listed below for more detail on each of the specific features discussed. Save the date!: A live webinar is scheduled for Thursday 25th October at 0900 Pacific Time / 1600UTC where we will discuss each of these enhancements in more detail. Registration will be open soon and published to the MySQL webinars page. MySQL Cluster 7.3: Development Release 1 The first MySQL Cluster 7.3 Development Milestone Release (DMR) previews Foreign Keys, bringing powerful new functionality to MySQL Cluster while eliminating development complexity. Foreign Key support has been one of the most requested enhancements to MySQL Cluster – enabling users to simplify their data models and application logic – while extending the range of use-cases for both custom projects requiring referential integrity and packaged applications, such as eCommerce, CRM, CMS, etc. Implementation The Foreign Key functionality is implemented directly within the MySQL Cluster data nodes, allowing any client API accessing the cluster to benefit from them – whether they are SQL or one of the NoSQL interfaces (Memcached, C++, Java, JPA, HTTP/REST or the new Node.js API - discussed later.) The core referential actions defined in the SQL:2003 standard are implemented: CASCADE RESTRICT NO ACTION SET NULL In addition, the MySQL Cluster implementation supports the online adding and dropping of Foreign Keys, ensuring the Cluster continues to serve both read and write requests during the operation. This represents a further enhancement to MySQL Cluster's support for online schema changes, i.e. adding and dropping indexes, adding columns, etc. Read this blog for a demonstration of using Foreign Keys with MySQL Cluster. Getting Started with MySQL Cluster 7.3 DMR1: Users can download either the source or binary and evaluate the MySQL Cluster 7.3 DMR with Foreign Keys now! (Select the Development Release tab). MySQL Cluster NoSQL API for Node.js Node.js is hot! In a little over 3 years, it has become one of the most popular environments for developing next generation web, cloud, mobile and social applications. Bringing JavaScript from the browser to the server, the design goal of Node.js is to build new real-time applications supporting millions of client connections, serviced by a single CPU core. Making it simple to further extend the flexibility and power of Node.js to the database layer, we are previewing the Node.js JavaScript API for MySQL Cluster as an Early Access release, available for download now from http://labs.mysql.com/. Select the following build: MySQL-Cluster-NoSQL-Connector-for-Node-js Alternatively, you can clone the project at the MySQL GitHub page. Implemented as a module for the V8 engine, the new API provides Node.js with a native, asynchronous JavaScript interface that can be used to both query and receive result sets directly from MySQL Cluster, without transformations to SQL.
Figure 1: MySQL Cluster NoSQL API for Node.js enables end-to-end JavaScript development Rather than just presenting a simple interface to the database, the Node.js module integrates the MySQL Cluster native API library directly within the web application itself, enabling developers to seamlessly couple their high performance, distributed applications with a high performance, distributed, persistence layer delivering 99.999% availability. The new Node.js API joins a rich array of NoSQL interfaces available for MySQL Cluster. Whichever API is chosen for an application, SQL and NoSQL can be used concurrently across the same data set, providing the ultimate in developer flexibility. Get started with the MySQL Cluster NoSQL API for Node.js tutorial. MySQL Cluster GUI-Based Auto-Installer Compatible with both MySQL Cluster 7.2 and 7.3, the Auto-Installer makes it simple for DevOps teams to quickly configure and provision highly optimized MySQL Cluster deployments – whether on-premise or in the cloud. Implemented with a standard HTML GUI and Python-based web server back-end, the Auto-Installer intelligently configures MySQL Cluster based on application requirements and auto-discovered hardware resources. Figure 2: Automated Tuning and Configuration of MySQL Cluster. Developed by the same engineering team responsible for the MySQL Cluster database, the installer provides standardized configurations that make it simple, quick and easy to build stable and high performance clustered environments. The auto-installer is previewed as an Early Access release, available for download now from http://labs.mysql.com/, by selecting the MySQL-Cluster-Auto-Installer build. You can read more about getting started with the MySQL Cluster auto-installer here. Watch the YouTube video for a demonstration of using the MySQL Cluster auto-installer. Getting Started with MySQL Cluster If you are new to MySQL Cluster, the Getting Started guide will walk you through installing an evaluation cluster on a single host (these guides reflect MySQL Cluster 7.2, but apply equally well to 7.3 and the Early Access previews). Or use the new MySQL Cluster Auto-Installer! Download the Guide to Scaling Web Databases with MySQL Cluster (to learn more about its architecture, design and ideal use-cases). Post any questions to the MySQL Cluster forum where our Engineering team and the MySQL Cluster community will attempt to assist you. Post any bugs you find to the MySQL bug tracking system (select MySQL Cluster from the Category drop-down menu). And if you have any feedback, please post it to the Comments section here or in the blogs referenced in this article. Summary MySQL Cluster 7.2 is the GA, production-ready release of MySQL Cluster. The first Development Release of MySQL Cluster 7.3 and the Early Access previews give you the opportunity to preview and evaluate future developments in the MySQL Cluster database, and we are very excited to be able to share that with you. Let us know how you get along with MySQL Cluster 7.3, and other features that you want to see in future releases, by using the comments of this blog.
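Returning to the Foreign Key support described earlier in this post, here is a minimal SQL sketch of what it looks like in practice. The table and column names are illustrative only; the point is that the constraint is declared with standard MySQL syntax on NDB tables and then enforced in the data nodes.

-- Hypothetical parent/child tables on the NDB storage engine,
-- using two of the SQL:2003 referential actions listed above.
CREATE TABLE customers (
    id   INT NOT NULL PRIMARY KEY,
    name VARCHAR(100)
) ENGINE=NDB;

CREATE TABLE orders (
    id          INT NOT NULL PRIMARY KEY,
    customer_id INT NOT NULL,
    total       DECIMAL(10,2),
    INDEX (customer_id),
    FOREIGN KEY (customer_id) REFERENCES customers (id)
        ON DELETE CASCADE
        ON UPDATE RESTRICT
) ENGINE=NDB;

-- Deleting a parent row now cascades to its child rows.
DELETE FROM customers WHERE id = 42;

Because the constraint lives in the data nodes, the same behaviour should apply whether the rows are written through SQL or through one of the NoSQL interfaces mentioned above.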

    Read the article

  • Interview with Tomas Ulin at the MySQL Innovation Day

    - by Monica Kumar
    MySQL Innovation Day held on June 5, 2012 was a great event for the MySQL engineers, users and customers to gather, share and network. I was able to get a few minutes with Tomas Ulin, Vice President of MySQL Engineering at Oracle, to ask him some questions. Here are the highlights of my interview with Tomas. Monica: This was the first MySQL Innovation Day, correct? Why now, what was the strategy behind hosting this kind of event? Tomas: In the last year, we have rolled out an incredible number of MySQL events worldwide – some targeted at developers that are new to MySQL and others for the MySQL savvy. At the MySQL Innovation Day, our first event of this kind, we had a number of our key engineers presenting lightning talks delivering previews of key new features as well as discussing the roadmap. Our goal is to keep an open dialogue with the MySQL community. In fact, we are hosting a two-day conference, another first, for the MySQL community called MySQL Connect on Sept. 29-30 in San Francisco. If you attended the MySQL Innovation Day and liked what we did, you are going to love MySQL Connect. We’ll have a lot more of our engineers and many users and community members presenting hour-long sessions and hands-on labs. Our engineers will be presenting new MySQL features as well as offer previews of upcoming enhancements. Monica: What's the big take-away from today's MySQL Innovation Day? Tomas: I hope the most important takeaway for attendees was to see that Oracle has been driving, and continues to drive, MySQL innovation with a steady stream of great new GA and Development Milestone releases. Monica: What were attendees most interested in? What feedback did they have? Tomas: Feedback from attendees was incredibly positive and encouraging. In particular, they liked the interaction with the MySQL engineers and were also excited about the new early access features in MySQL 5.6 and MySQL Cluster 7.3. In addition, sessions delivered by MySQL users like Facebook, Pinterest and Twitter were very well received. For example, Pinterest talked about using MySQL to scale from 0 to billions of page views/month, Twitter talked about “Scaling twitter with MySQL” and Facebook discussed the many options to implement MySQL master failover solutions. The presentations are already available for download while some of the session videos will be made available on the MySQL Innovation Day web page shortly. Monica: How would you distinguish the use of MySQL vs. Oracle Database? What key factors should customers consider? Tomas: MySQL and Oracle Database complement each other. They are very different products, best suited to different use cases. Customers can choose world-class solutions from Oracle to fulfill a variety of needs. MySQL is a great choice for enterprise web-based, custom and embedded apps. Oracle Database is the leading choice for enterprise packaged applications such as ERP, CRM as well as high-end data warehousing and business intelligence applications. Monica: What are the highlights of the current MySQL 5.6 Development Milestone Release and early access features for MySQL Cluster 7.3? Tomas: The MySQL 5.6 development milestone release builds on MySQL 5.5 by improving: Optimizer for better Performance, Scalability Performance Schema for better instrumentation InnoDB for better transactional throughput Replication for higher availability, data integrity NoSQL options for more flexibility We announced some new early access features in MySQL 5.6, including binary log group commit.
We also announced early access features in MySQL Cluster 7.3, including support for foreign key constraints. Monica: How do people get these releases? Tomas: You can access development milestone releases by going to http://dev.mysql.com/downloads/mysql and then selecting the “Development Release” tab. MySQL Cluster 7.3 and the other early access features can be downloaded at http://labs.mysql.com Monica: What's coming up next for MySQL? Tomas: Our development team is working in overdrive, cranking out new features with community feedback. Don’t miss the MySQL Connect conference being held in San Francisco on Sept. 29 and 30th. My team and I will be there. I hope you can join us! Monica: Thank you for your time, Tomas. I look forward to seeing you at the MySQL Connect conference. To our followers, I hope you found this interview informative. I welcome your comments. Please stay tuned here for more updates on MySQL. Note: Monica Kumar is Senior Director of product marketing for Linux, Virtualization and MySQL at Oracle.

    Read the article

  • SQL SERVER – LCK_M_XXX – Wait Type – Day 15 of 28

    - by pinaldave
    Locking is a mechanism used by the SQL Server Database Engine to synchronize access by multiple users to the same piece of data, at the same time. In simpler words, it maintains the integrity of data by protecting (or preventing) access to the database object. From Book On-Line: LCK_M_BU Occurs when a task is waiting to acquire a Bulk Update (BU) lock. LCK_M_IS Occurs when a task is waiting to acquire an Intent Shared (IS) lock. LCK_M_IU Occurs when a task is waiting to acquire an Intent Update (IU) lock. LCK_M_IX Occurs when a task is waiting to acquire an Intent Exclusive (IX) lock. LCK_M_S Occurs when a task is waiting to acquire a Shared lock. LCK_M_SCH_M Occurs when a task is waiting to acquire a Schema Modify lock. LCK_M_SCH_S Occurs when a task is waiting to acquire a Schema Share lock. LCK_M_SIU Occurs when a task is waiting to acquire a Shared With Intent Update lock. LCK_M_SIX Occurs when a task is waiting to acquire a Shared With Intent Exclusive lock. LCK_M_U Occurs when a task is waiting to acquire an Update lock. LCK_M_UIX Occurs when a task is waiting to acquire an Update With Intent Exclusive lock. LCK_M_X Occurs when a task is waiting to acquire an Exclusive lock. LCK_M_XXX Explanation: I think the explanation of this wait type is the simplest. When any task is waiting to acquire a lock on any resource, this particular wait type occurs. The common reason for the task to be waiting to put a lock on the resource is that the resource is already locked and some other operations may be going on within it. This wait also indicates that resources are not available or are occupied at the moment due to some reason. There is a good chance that the waiting queries start to time out if this wait type is very high. The client application may see degraded performance as well. You can use various methods to find blocking queries: EXEC sp_who2 SQL SERVER – Quickest Way to Identify Blocking Query and Resolution – Dirty Solution DMV – sys.dm_tran_locks DMV – sys.dm_os_waiting_tasks (a sample query is sketched at the end of this post). Reducing LCK_M_XXX wait: Check the Explicit Transactions. If transactions are very long, this wait type can start building up because of other waiting transactions. Keep the transactions small. Serializable isolation can build up this wait type. If that is an acceptable isolation level for your business, this wait type may be natural. The default isolation level of SQL Server is ‘Read Committed’. One of my clients has changed their isolation level to “Read Uncommitted”. I strongly discourage the use of this because it will probably lead to lots of dirty reads. Identify blocking queries using the various methods described above, and then optimize them. Partitioning can be one of the options to consider because it will allow transactions to execute concurrently on different partitions. If there are runaway queries, use a timeout. (Please discuss this solution with your database architect first, as timeouts can work against you).
Check that there are no memory and IO-related issues using the following counters: Checking Memory Related Perfmon Counters SQLServer: Memory Manager\Memory Grants Pending (Consistent higher value than 0-2) SQLServer: Memory Manager\Memory Grants Outstanding (Consistent higher value, Benchmark) SQLServer: Buffer Manager\Buffer Cache Hit Ratio (Higher is better, greater than 90% for a usually smooth-running system) SQLServer: Buffer Manager\Page Life Expectancy (Consistent lower value than 300 seconds) Memory: Available Mbytes (Information only) Memory: Page Faults/sec (Benchmark only) Memory: Pages/sec (Benchmark only) Checking Disk Related Perfmon Counters Average Disk sec/Read (Consistent higher value than 4-8 millisecond is not good) Average Disk sec/Write (Consistent higher value than 4-8 millisecond is not good) Average Disk Read/Write Queue Length (Consistent higher value than benchmark is not good) Read all the posts in the Wait Types and Queue series. Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Book OnLine for further clarification. All the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
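Here is one minimal way to put the DMVs mentioned above together; treat it as a sketch and adapt the columns to your own troubleshooting needs.

-- A sketch combining sys.dm_os_waiting_tasks and sys.dm_exec_requests
-- to list sessions currently waiting on lock (LCK_M_XXX) wait types,
-- which session is blocking them, and the statement being blocked.
SELECT
    wt.session_id,
    wt.wait_type,
    wt.wait_duration_ms,
    wt.blocking_session_id,
    er.command,
    st.text AS blocked_sql
FROM sys.dm_os_waiting_tasks AS wt
INNER JOIN sys.dm_exec_requests AS er
    ON wt.session_id = er.session_id
CROSS APPLY sys.dm_exec_sql_text(er.sql_handle) AS st
WHERE wt.wait_type LIKE 'LCK_M_%'
ORDER BY wt.wait_duration_ms DESC;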

    Read the article

  • Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 2

    - by rajbk
    In the previous post, you saw how to create an OData feed and pre-filter the data. In this post, we will see how to shape the data. A sample project is attached at the bottom of this post. Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 1 Shaping the feed The Product feed we created earlier returns too much information about our products. Let’s change this so that only the following properties are returned – ProductID, ProductName, QuantityPerUnit, UnitPrice, UnitsInStock. We also want to return only Products that are not discontinued.  Splitting the Entity To shape our data according to the requirements above, we are going to split our Product Entity into two and expose one through the feed. The exposed entity will contain only the properties listed above. We will use the other Entity in our Query Interceptor to pre-filter the data so that discontinued products are not returned. Go to the design surface for the Entity Model and make a copy of the Product entity. A “Product1” Entity gets created.   Rename Product1 to ProductDetail. Right click on the Product entity and select “Add Association” Make a one to one association between Product and ProductDetails.   Keep only the properties we wish to expose on the Product entity and delete all other properties on it (see diagram below). You delete a property on an Entity by right clicking on the property and selecting “delete”. Keep the ProductID on the ProductDetail. Delete any other property on the ProductDetail entity that is already present in the Product entity. Your design surface should look like below:    Mapping Entity to Database Tables Right click on “ProductDetail” and go to “Table Mapping”   Add a mapping to the “Products” table in the Mapping Details.   After mapping ProductDetail, you should see the following.   Add a referential constraint. Lets add a referential constraint which is similar to a referential integrity constraint in SQL. Double click on the Association between the Entities and add the constraint with “Principal” set to “Product”. Let us review what we did so far. We made a copy of the Product entity and called it ProductDetail We created a one to one association between these entities Excluding the ProductID, we made sure properties were not duplicated between these entities  We added a ProductDetail entity to Products table mapping (Entity to Database). We added a referential constraint between the entities. Lets build our project. We get the following error: ”'NortwindODataFeed.Product' does not contain a definition for 'Discontinued' and no extension method 'Discontinued' accepting a first argument of type 'NortwindODataFeed.Product' could be found …" The reason for this error is because our Product Entity no longer has a “Discontinued” property. We “moved” it to the ProductDetail entity since we want our Product Entity to contain only properties that will be exposed by our feed. Since we have a one to one association between the entities, we can easily rewrite our Query Interceptor like so: [QueryInterceptor("Products")] public Expression<Func<Product, bool>> OnReadProducts() { return o => o.ProductDetail.Discontinued == false; } Similarly, all “hidden” properties of the Product table are available to us internally (through the ProductDetail Entity) for any additional logic we wish to implement. Compile the project and view the feed. We see that the feed returns only the properties that were part of the requirement.   
To see the data in JSON format, you have to create a request with the following request header: Accept: application/json, text/javascript, */* (easy to do in jQuery) The result should look like this: { "d" : { "results": [ { "__metadata": { "uri": "http://localhost.:2576/DataService.svc/Products(1)", "type": "NorthwindModel.Product" }, "ProductID": 1, "ProductName": "Chai", "QuantityPerUnit": "10 boxes x 20 bags", "UnitPrice": "18.0000", "UnitsInStock": 39 }, { "__metadata": { "uri": "http://localhost.:2576/DataService.svc/Products(2)", "type": "NorthwindModel.Product" }, "ProductID": 2, "ProductName": "Chang", "QuantityPerUnit": "24 - 12 oz bottles", "UnitPrice": "19.0000", "UnitsInStock": 17 }, { ... ... If anyone has the $format operation working, please post a comment. It was not working for me at the time of writing this. We have successfully pre-filtered our data to expose only products that have not been discontinued and shaped our data so that only certain properties of the Entity are exposed. Note that there are several other ways you could implement this, like creating a QueryView, Stored Procedure or DefiningQuery. You have seen how easy it is to create an OData feed, shape the data and pre-filter it by hardly writing any code of your own. For more details on OData, Google it with your favorite search engine :-) Also check out one of the most passionate people I have ever met, Pablo Castro – the architect of Astoria (WCF Data Services). Watch his MIX 2010 presentation titled “OData: There's a Feed for That” here. Download Sample Project for VS 2010 RTM NortwindODataFeed.zip
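For comparison with the Entity Framework approach above, here is a hypothetical SQL sketch of the same shaping and pre-filtering applied directly in the database – roughly what the QueryView/DefiningQuery alternatives mentioned above boil down to. The column names follow the Northwind Products table; the view name is made up.

-- Expose only the five required columns and hide discontinued products.
CREATE VIEW ProductFeed AS
SELECT ProductID, ProductName, QuantityPerUnit, UnitPrice, UnitsInStock
FROM Products
WHERE Discontinued = 0;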

    Read the article

  • Big Data – Buzz Words: Importance of Relational Database in Big Data World – Day 9 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned what HDFS is. In this article we will take a quick look at the importance of the Relational Database in the Big Data world. A Big Question? Here are a few questions I have often received since the beginning of the Big Data Series - Does the relational database have no place in the story of Big Data? Is the relational database no longer relevant as Big Data evolves? Is the relational database not capable of handling Big Data? Is it true that one no longer has to learn about relational data if Big Data is the final destination? Well, every single time I hear that a person wants to learn about Big Data and is no longer interested in learning about relational databases, I find it a bit far-stretched. I am not here to give the ambiguous answer of It Depends. I am personally very clear that anyone who is aspiring to become a Big Data Scientist or Big Data Expert should learn about relational databases. NoSQL Movement The reason for the NoSQL Movement in recent times was the two important advantages of NoSQL databases: Performance Flexible Schema In my personal experience, I have found both of the above listed advantages when I use a NoSQL database. There are instances when I have found the relational database too restrictive, when my data is unstructured or uses datatypes which my relational database does not support. In the same way, there are cases where I have found NoSQL solutions performing much better than relational databases. I must say that I am a big fan of NoSQL solutions in recent times, but I have also seen occasions and situations where the relational database is still a perfect fit even though the database is growing rapidly and has all the symptoms of big data. Situations Where the Relational Database Outperforms Ad-hoc reporting is one of the most common scenarios where NoSQL does not have an optimal solution. For example, reporting queries often need to aggregate based on columns which are not indexed and are built while the report is running; in this kind of scenario NoSQL databases (document stores, distributed key-value stores) often do not perform well. In the case of ad-hoc reporting I have often found it much easier to work with relational databases. SQL is the most popular computer language of all time. I have been using it for over 10 years and I feel that I will be using it for a long time in the future. There are plenty of tools, connectors and awareness of the SQL language in the industry. Pretty much every programming language has written drivers for the SQL language and most developers have learned this language during their school/college time. In many cases, writing a query in SQL is much easier than writing queries in NoSQL-supported languages. I believe this is the current situation, but in the future this situation can reverse when NoSQL query languages are equally popular. ACID (Atomicity Consistency Isolation Durability) – Not all NoSQL solutions offer an ACID-compliant language. There are always situations (for example banking transactions, eCommerce shopping carts etc.) where, if there is no ACID, the operations can be invalid and database integrity can be at risk. Even though the data volume may indeed qualify as Big Data, there are always operations in the application which absolutely need an ACID-compliant, matured language.
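To make the ACID point above concrete, here is a minimal T-SQL sketch of the classic banking example; the table and column names are illustrative only. Either both updates commit or neither does, which is exactly the guarantee that many NoSQL stores do not provide out of the box.

-- Transfer 100 between two accounts atomically.
BEGIN TRY
    BEGIN TRANSACTION;

    UPDATE Accounts SET Balance = Balance - 100 WHERE AccountID = 1;
    UPDATE Accounts SET Balance = Balance + 100 WHERE AccountID = 2;

    COMMIT TRANSACTION;          -- both changes become durable together
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;    -- any failure undoes both updates
END CATCH;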
The Mixed Bag I have often heard the argument that all the big social media sites nowadays have moved away from the Relational Database. Actually this is not entirely true. While researching Big Data and Relational Databases, I have found that many of the popular social media sites use Big Data solutions along with Relational Databases. Many are using relational databases to deliver results to end users at run time, and many still use a relational database as their major backbone. Here are a few examples: Facebook uses MySQL to display the timeline. (Reference Link) Twitter uses MySQL. (Reference Link) Tumblr uses Sharded MySQL (Reference Link) Wikipedia uses MySQL for data storage. (Reference Link) There are many prominent organizations running large-scale applications which use relational databases along with various Big Data frameworks to satisfy their various business needs. Summary I believe that RDBMS is like vanilla ice cream. Everybody loves it and everybody has it. NoSQL and other solutions are like chocolate ice cream or custom ice cream – there is a huge base which loves them and wants them, but not every ice cream maker can make it just right for everyone’s taste. No matter how fancy an ice cream store is, there is always plain vanilla ice cream available there. In just the same way, there are always cases and situations in the Big Data story where the traditional relational database is part of the whole story. In real-world scenarios there will always be cases where there is a need for relational database concepts and ideology. It is extremely important to accept the relational database as one of the key components of Big Data instead of treating it as a substandard technology. Ray of Hope – NewSQL In this module we discussed that there are places where we need ACID compliance from our Big Data application and NoSQL will not support that out of the box. There is a new term coined for the applications/tools which support most of the properties of the traditional RDBMS and support Big Data infrastructure – NewSQL. Tomorrow In tomorrow’s blog post we will discuss NewSQL. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Oracle Enterprise Data Quality: Ever Integration-ready

    - by Mala Narasimharajan
    It is closing in on a year now since Oracle’s acquisition of Datanomic, and the addition of Oracle Enterprise Data Quality (EDQ) to the Oracle software family. The big move has caused some big shifts in emphasis and some very encouraging excitement from the field. To give an illustration, combined with a shameless promotion of how EDQ can help to give quick insights into your data, I did a quick Phrase Profile of the subject field of emails to the Global EDQ mailing list since it was set up last September. The results revealed a very clear theme: Integration, Integration, Integration! As well as the important Siebel and Oracle Data Integrator (ODI) integrations, we have been asked about integration with a huge variety of Oracle applications, including EBS, Peoplesoft, CRM on Demand, Fusion, DRM, Endeca, RightNow, and more - and we have not stood still! While it would not have been possible to develop specific pre-integrations with all of the above within a year, we have developed a package of feature-rich out-of-the-box web services and batch processes that can be plugged into any application or middleware technology with ease. And with Siebel, they work out of the box. Oracle Enterprise Data Quality version 9.0.4 includes the Customer Data Services (CDS) pack – a ready set of standard processes with standard interfaces, to provide integrated: Address verification and cleansing Individual matching Organization matching The services are suitable for either Batch or Real-Time processing, and are enabled for international data, with simple configuration options driving the set of locale-specific dictionaries that are used. For example, large dictionaries are provided to support international name transcription and variant matching, including highly specialized handling for Arabic, Japanese, Chinese and Korean data. In total across all locales, CDS includes well over a million dictionary entries. Excerpt from EDQ’s CDS Individual Name Standardization Dictionary CDS has been developed to replace the OEM of Informatica Identity Resolution (IIR) for attached Data Quality on the Oracle price list, but does this in a way that creates a ‘best of both worlds’ situation for customers, who can harness not only the out-of-the-box functionality of pre-packaged matching and standardization services, but also the flexibility of OEDQ if they want to customize the interfaces or the process logic, without having to learn more than one product. From a competitive point of view, we believe this stands us in good stead against our key competitors, including Informatica, who have separate ‘Identity Resolution’ and general DQ products, and IBM, who provide limited out-of-the-box capabilities (with a steep learning curve) in both their QualityStage data quality and Initiate matching products. Here is a brief guide to the main services provided in the pack: Address Verification and Standardization EDQ’s CDS Address Cleaning Process The Address Verification and Standardization service uses EDQ Address Verification (an OEM of Loqate software) to verify and clean addresses in either real-time or batch.
The Address Verification processor is wrapped in an EDQ process – this adds significant capabilities over calling the underlying Address Verification API directly, specifically: Country-specific thresholds to determine when to accept the verification result (and therefore to change the input address) based on the confidence level of the API Optimization of address verification by pre-standardizing data where required Formatting of output addresses into the input address fields normally used by applications Adding descriptions of the address verification and geocoding return codes The process can then be used to provide real-time and batch address cleansing in any application; such as a simple web page calling address cleaning and geocoding as part of a check on individual data.     Duplicate Prevention Unlike Informatica Identity Resolution (IIR), EDQ uses stateless services for duplicate prevention to avoid issues caused by complex replication and synchronization of large volume customer data. When a record is added or updated in an application, the EDQ Cluster Key Generation service is called, and returns a number of key values. These are used to select other records (‘candidates’) that may match in the application data (which has been pre-seeded with keys using the same service). The ‘driving record’ (the new or updated record) is then presented along with all selected candidates to the EDQ Matching Service, which decides which of the candidates are a good match with the driving record, and scores them according to the strength of match. In this model, complex multi-locale EDQ techniques can be used to generate the keys and ensure that the right balance between performance and matching effectiveness is maintained, while ensuring that the application retains control of data integrity and transactional commits. The process is explained below: EDQ Duplicate Prevention Architecture Note that where the integration is with a hub, there may be an additional call to the Cluster Key Generation service if the master record has changed due to merges with other records (and therefore needs to have new key values generated before commit). Batch Matching In order to allow customers to use different match rules in batch to real-time, separate matching templates are provided for batch matching. For example, some customers want to minimize intervention in key user flows (such as adding new customers) in front end applications, but to conduct a more exhaustive match on a regular basis in the back office. The batch matching jobs are also used when migrating data between systems, and in this case normally a more precise (and automated) type of matching is required, in order to minimize the review work performed by Data Stewards.  In batch matching, data is captured into EDQ using its standard interfaces, and records are standardized, clustered and matched in an EDQ job before matches are written out. As with all EDQ jobs, batch matching may be called from Oracle Data Integrator (ODI) if required. When working with Siebel CRM (or master data in Siebel UCM), Siebel’s Data Quality Manager is used to instigate batch jobs, and a shared staging database is used to write records for matching and to consume match results. The CDS batch matching processes automatically adjust to Siebel’s ‘Full Match’ (match all records against each other) and ‘Incremental Match’ (match a subset of records against all of their selected candidates) modes. 
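To make the duplicate-prevention flow above a little more concrete, here is a hypothetical SQL sketch of the candidate-selection step as an application might implement it. The table names, and the idea that the application stores the EDQ-generated cluster keys alongside its own customer records, are assumptions for illustration only; the actual interfaces are the CDS web services described above.

-- Assumed schema: CUSTOMER holds the application's records, and
-- CUSTOMER_CLUSTER_KEY holds the keys returned by the Cluster Key
-- Generation service for each existing record.
SELECT c.*
FROM   CUSTOMER c
JOIN   CUSTOMER_CLUSTER_KEY k
       ON k.CUSTOMER_ID = c.CUSTOMER_ID
WHERE  k.CLUSTER_KEY IN (:driving_record_keys);   -- keys generated for the new/updated record

-- The selected candidates, plus the driving record, are then passed to the
-- EDQ Matching Service, which scores the candidates and decides the matches.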
The Future The Customer Data Services Pack is an important part of the Oracle strategy for EDQ, offering a clear path to making Data Quality Assurance an integral part of enterprise applications, and providing a strong value proposition for adopting EDQ. We are planning various additions and improvements, including: An out-of-the-box Data Quality Dashboard Even more comprehensive international data handling Address search (suggesting multiple results) Integrated address matching The EDQ Customer Data Services Pack is part of the Enterprise Data Quality Media Pack, available for download at http://www.oracle.com/technetwork/middleware/oedq/downloads/index.html.

    Read the article

  • SQL SERVER – PAGELATCH_DT, PAGELATCH_EX, PAGELATCH_KP, PAGELATCH_SH, PAGELATCH_UP – Wait Type – Day 12 of 28

    - by pinaldave
    This is another common wait type. However, I still frequently see people getting confused with PAGEIOLATCH_X and PAGELATCH_X wait types. Actually, there is a big difference between the two. PAGEIOLATCH is related to IO issues, while PAGELATCH is not related to IO issues but is oftentimes linked to a buffer issue. Before we delve deeper into this interesting topic, let us first understand what a Latch is. Latches are internal SQL Server locks which can be described as very lightweight and short-term synchronization objects. Latches are not primarily there to protect pages being read from disk into memory; a latch is a synchronization object for any in-memory access to any portion of a log or data file. [Updated based on a comment from Paul Randal] The difference between locks and latches is that locks seal all the involved resources throughout the duration of the transaction (and other processes will have no access to the object), whereas latches lock the resources only during the time when the data is changed. This way, a latch is able to maintain the integrity of the data between the storage engine and the data cache. A latch is a short-lived lock that is put on resources in the buffer cache and on the physical disk when data is moved in either direction. As soon as the data is moved, the latch is released. Now, let us understand the wait stat types related to latches. From Book On-Line: PAGELATCH_DT Occurs when a task is waiting on a latch for a buffer that is not in an I/O request. The latch request is in Destroy mode. PAGELATCH_EX Occurs when a task is waiting on a latch for a buffer that is not in an I/O request. The latch request is in Exclusive mode. PAGELATCH_KP Occurs when a task is waiting on a latch for a buffer that is not in an I/O request. The latch request is in Keep mode. PAGELATCH_SH Occurs when a task is waiting on a latch for a buffer that is not in an I/O request. The latch request is in Shared mode. PAGELATCH_UP Occurs when a task is waiting on a latch for a buffer that is not in an I/O request. The latch request is in Update mode. PAGELATCH_X Explanation: When there is contention on access to the in-memory pages, this wait type shows up. It is quite possible that some of the pages in memory are in very high demand. For SQL Server to access them and put a latch on the pages, it will have to wait. This wait type is usually created at the same time. Additionally, it is commonly visible when TempDB has higher contention as well. If there are indexes that are heavily used, contention can be created as well, leading to this wait type. Reducing PAGELATCH_X wait: The following counters are useful to understand the status of the PAGELATCH: Average Latch Wait Time (ms): The wait time for latch requests that have to wait. Latch Waits/sec: This is the number of latch requests that could not be granted immediately. Total Latch Wait Time (ms): This is the total latch wait time for latch requests in the last second. If there is TempDB contention, I suggest that you read the blog post of Robert Davis right away. He has written an excellent blog post regarding how to find out TempDB contention. The same blog post explains the terms in the allocation of GAM, SGAM and PFS. If there is TempDB contention, Paul Randal explains the optimal settings for the TempDB in his misconceptions series. Trace Flag 1118 can be useful, but use it very carefully. I totally understand that this blog post is not as clear as my other blog posts. If this wait stat is one of your higher wait types,
do leave a comment or send me an email and I will get back to you with my solution for your situation. Looking at all the other wait stats and types together can be effective, as this wait type can help suggest the proper bottleneck in your system. Read all the posts in the Wait Types and Queue series. Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Book OnLine for further clarification. All the discussions of Wait Stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
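As a small sketch to go with the latch counters and the TempDB discussion above, the following query lists tasks currently waiting on PAGELATCH_XX. Treat it as a starting point only: the resource_description column shows the page as database_id:file_id:page_id, so entries beginning with 2: point at TempDB pages.

-- Current PAGELATCH waits; resource_description is database_id:file_id:page_id,
-- so values starting with '2:' indicate TempDB (e.g. GAM/SGAM/PFS contention).
SELECT
    session_id,
    wait_type,
    wait_duration_ms,
    resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGELATCH_%'
ORDER BY wait_duration_ms DESC;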

    Read the article

  • The Top Ten Security Top Ten Lists

    - by Troy Kitch
    As a marketer, I'm always putting together the top 3, or 5 best, or an assortment of top ten lists. So instead of going that route, I've put together my top ten security top ten lists. These are not only for security practitioners, but also for the average Joe/Jane; because who isn't concerned about security these days? Now, there might not be ten for each one of these lists, but the title works best that way. Starting with my number ten (in no particular order): 10. Top 10 Most Influential Security-Related Movies Amrit Williams pulls together a great collection of security-related movies. He asks for comments on which one made you want to get into the business. I would have to say that my most influential movie(s) that made me want to get into the business of "stopping the bad guys" would have to be the James Bond series. I grew up on James Bond movies: thwarting the bad guy and saving the world. I recall being both ecstatic and worried when Silicon Valley-themed "A View to A Kill" hit theaters: "An investigation of a horse-racing scam leads 007 to a mad industrialist who plans to create a worldwide microchip monopoly by destroying California's Silicon Valley." Yikes! 9. Top Ten Security Careers From movies that got you into the career, here's a top 10 list of security-related careers. It starts with number ten, Information Security Analyst, and ends with number one, Malware Analyst. They point out the significant growth in security careers and indicate that "according to the Bureau of Labor Statistics, the field is expected to experience growth rates of 22% between 2010-2020." If you are interested in getting into the field, Oracle has many great opportunities all around the world. 8. Top 125 Network Security Tools A bit outside of the range of 10, the top 125 Network Security Tools is an important list because it includes a prioritized list of key security tools practitioners are using in the hacking community, regardless of whether they are vendor supplied or open source. The exhaustive list provides ratings, reviews, searching, and sorting. 7. Top 10 Security Practices I have to give a shout out to my alma mater, Cal Poly, SLO: Go Mustangs! They have compiled their list of top 10 practices for students and faculty to follow. Educational institutions are a common target of web-based attacks and miscellaneous errors according to the 2014 Verizon Data Breach Investigations Report. 6. (ISC)2 Top 10 Safe and Secure Online Tips for Parents This list is arguably the most important one on my list. The tips were "gathered from (ISC)2 member volunteers who participate in the organization’s Safe and Secure Online program, a worldwide initiative that brings top cyber security experts into schools to teach children ages 11-14 how to protect themselves in a cyber-connected world…If you are a parent, educator or organization that would like the Safe and Secure Online presentation delivered at your local school, or would like more information about the program, please visit here.” 5. Top Ten Data Breaches of the Past 12 Months This type of list is always changing, so it's nice to have a current one here from Techradar.com. They've compiled and commented on the top breaches. It is likely that most readers here were affected in some way or another. 4. Top Ten Security Comic Books Although mostly physical security controls, I threw this one in for fun. My vote for #1 (not on the list) would be Professor X.
The guy can breach confidentiality, integrity, and availability just by messing with your thoughts. 3. The IOUG Data Security Survey's Top 10+ Threats to Organizations The Independent Oracle Users Group annual survey on enterprise data security, Leaders Vs. Laggards, highlights what Oracle Database users deem as the top 12 threats to their organization. You can find a nice graph on page 9; Figure 7: Greatest Threats to Data Security. 2. The Ten Most Common Database Security Vulnerabilities Though I don't necessarily agree with all of the vulnerabilities in this order... I like a list that focuses on where two-thirds of your sensitive and regulated data resides (Source: IDC). 1. OWASP Top Ten Project The Open Web Application Security Project puts together their list of the 10 most critical web application security risks that organizations should be including in their overall security, business risk and compliance plans. In particular, SQL injection risk continues to rear its ugly head each year. Oracle Audit Vault and Database Firewall can help prevent SQL injection attacks and monitor database and system activity as a detective security control. Did I miss any?
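As a minimal illustration of the SQL injection risk called out in the OWASP item above, here is a hedged T-SQL sketch; the Users table, Name column and input value are made up. It contrasts an injectable dynamic query with a parameterized one.

-- Vulnerable pattern: user input concatenated straight into dynamic SQL.
DECLARE @user_input NVARCHAR(100) = N''' OR ''1''=''1';
DECLARE @sql NVARCHAR(MAX) =
    N'SELECT * FROM Users WHERE Name = ''' + @user_input + N'''';
-- EXEC (@sql);  -- would return every row, not just the intended user

-- Safer pattern: parameterized execution keeps the input as data, not code.
EXEC sp_executesql
    N'SELECT * FROM Users WHERE Name = @name',
    N'@name NVARCHAR(100)',
    @name = @user_input;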

    Read the article

  • Open Source but not Free Software (or vice versa)

    - by TRiG
    The definition of "Free Software" from the Free Software Foundation: “Free software” is a matter of liberty, not price. To understand the concept, you should think of “free” as in “free speech,” not as in “free beer.” Free software is a matter of the users' freedom to run, copy, distribute, study, change and improve the software. More precisely, it means that the program's users have the four essential freedoms: The freedom to run the program, for any purpose (freedom 0). The freedom to study how the program works, and change it to make it do what you wish (freedom 1). Access to the source code is a precondition for this. The freedom to redistribute copies so you can help your neighbor (freedom 2). The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this. A program is free software if users have all of these freedoms. Thus, you should be free to redistribute copies, either with or without modifications, either gratis or charging a fee for distribution, to anyone anywhere. Being free to do these things means (among other things) that you do not have to ask or pay for permission to do so. The definition of "Open Source Software" from the Open Source Initiative: Open source doesn't just mean access to the source code. The distribution terms of open-source software must comply with the following criteria: Free Redistribution The license shall not restrict any party from selling or giving away the software as a component of an aggregate software distribution containing programs from several different sources. The license shall not require a royalty or other fee for such sale. Source Code The program must include source code, and must allow distribution in source code as well as compiled form. Where some form of a product is not distributed with source code, there must be a well-publicized means of obtaining the source code for no more than a reasonable reproduction cost preferably, downloading via the Internet without charge. The source code must be the preferred form in which a programmer would modify the program. Deliberately obfuscated source code is not allowed. Intermediate forms such as the output of a preprocessor or translator are not allowed. Derived Works The license must allow modifications and derived works, and must allow them to be distributed under the same terms as the license of the original software. Integrity of The Author's Source Code The license may restrict source-code from being distributed in modified form only if the license allows the distribution of "patch files" with the source code for the purpose of modifying the program at build time. The license must explicitly permit distribution of software built from modified source code. The license may require derived works to carry a different name or version number from the original software. No Discrimination Against Persons or Groups The license must not discriminate against any person or group of persons. No Discrimination Against Fields of Endeavor The license must not restrict anyone from making use of the program in a specific field of endeavor. For example, it may not restrict the program from being used in a business, or from being used for genetic research. Distribution of License The rights attached to the program must apply to all to whom the program is redistributed without the need for execution of an additional license by those parties. 
License Must Not Be Specific to a Product The rights attached to the program must not depend on the program's being part of a particular software distribution. If the program is extracted from that distribution and used or distributed within the terms of the program's license, all parties to whom the program is redistributed should have the same rights as those that are granted in conjunction with the original software distribution. License Must Not Restrict Other Software The license must not place restrictions on other software that is distributed along with the licensed software. For example, the license must not insist that all other programs distributed on the same medium must be open-source software. License Must Be Technology-Neutral No provision of the license may be predicated on any individual technology or style of interface. These definitions, although they derive from very different ideologies, are broadly compatible, and most Free Software is also Open Source Software and vice versa. I believe, however, that it is possible for this not to be the case: It is possible for software to be Open Source without being Free, or to be Free without being Open Source. Questions Is my belief correct? Is it possible for software to fall into one camp and not the other? Does any such software actually exist? Please give examples. Clarification I've already accepted an answer now, but I seem to have confused a lot of people, so perhaps a clarification is in order. I was not asking about the difference between copyleft (or "viral", though I don't like that term) and non-copyleft ("permissive") licenses. Nor was I asking about your personal idiosyncratic definitions of "Free" and "Open". I was asking about "Free Software as defined by the FSF" and "Open Source Software as defined by the OSI". Are the two always the same? Is it possible to be one without being the other? And the answer, it seems, is that it's impossible to be Free without being Open, but possible to be Open without being Free. Thank you everyone who actually answered the question.

    Read the article

  • Big Data – Role of Cloud Computing in Big Data – Day 11 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of NewSQL. In this article we will understand the role of the Cloud in the Big Data story. What is Cloud? Cloud has been the biggest buzzword around for the last few years. Everyone knows about the Cloud and it is extremely well defined online. In this article we will discuss the cloud in the context of Big Data. Cloud computing is a method of providing shared computing resources to applications which require dynamic resources. These resources include applications, computing, storage, networking, development and various deployment platforms. The fundamental of cloud computing is that it shares pretty much all the resources and delivers them to end users as a service. Examples of Cloud Computing and Big Data are Google and Amazon.com. Both have fantastic Big Data offerings with the help of the cloud. We will discuss this later in this blog post. There are two different Cloud Deployment Models: 1) The Public Cloud and 2) The Private Cloud Public Cloud Public Cloud is the cloud infrastructure built by commercial providers (Amazon, Rackspace etc.) who create a highly scalable data center that hides the complex infrastructure from the consumer and provides various services. Private Cloud Private Cloud is the cloud infrastructure built by a single organization where they manage a highly scalable data center internally. Here is the quick comparison between Public Cloud and Private Cloud from Wikipedia: Public Cloud Private Cloud Initial cost Typically zero Typically high Running cost Unpredictable Unpredictable Customization Impossible Possible Privacy No (Host has access to the data) Yes Single sign-on Impossible Possible Scaling up Easy while within defined limits Laborious but no limits Hybrid Cloud Hybrid Cloud is the cloud infrastructure built with the composition of two or more clouds, like public and private clouds. Hybrid cloud gives the best of both worlds as it combines multiple cloud deployment models together. Cloud and Big Data – Common Characteristics There are many characteristics of the Cloud Architecture and Cloud Computing which are also essential for Big Data. They highly overlap, and in many places it just makes sense to use the power of both architectures and build a highly scalable framework. Here is the list of all the characteristics of cloud computing important in Big Data: Scalability Elasticity Ad-hoc Resource Pooling Low Cost to Setup Infrastructure Pay on Use or Pay as you Go Highly Available Leading Big Data Cloud Providers There are many players in the Big Data Cloud but we will list a few of the known players in this list. Amazon Amazon is arguably the most popular Infrastructure as a Service (IaaS) provider. The history of how Amazon started in this business is very interesting. They started out with a massive infrastructure to support their own business. Gradually they figured out that their own resources were underutilized most of the time. They decided to get the maximum out of the resources they have and hence they launched their Amazon Elastic Compute Cloud (Amazon EC2) service in 2006. Their products have evolved a lot recently and now it is one of their primary businesses besides their retail business. Amazon also offers Big Data services under Amazon Web Services.
Here is the list of the included services: Amazon Elastic MapReduce – It processes very high volumes of data Amazon DynamoDB – It is a fully managed NoSQL (Not Only SQL) database service Amazon Simple Storage Services (S3) – A web-scale service designed to store and accommodate any amount of data Amazon High Performance Computing – It provides low-latency tuned high performance computing clusters Amazon RedShift – It is a petabyte-scale data warehousing service Google Though Google is known for its search engine, we all know that it is much more than that. Google Compute Engine – It offers secure, flexible computing from energy efficient data centers Google BigQuery – It allows SQL-like queries to run against large datasets Google Prediction API – It is a cloud-based machine learning tool Other Players Besides Amazon and Google, we also have other players in the Big Data market. Microsoft is also addressing Big Data in the cloud with Microsoft Azure. Additionally, Rackspace and NASA together have initiated OpenStack. The goal of OpenStack is to provide a massively scaled, multitenant cloud that can run on any hardware. Things to Watch Cloud-based solutions provide great integration with the Big Data story and are very economical to implement. However, there are a few things one should be very careful about when deploying Big Data on cloud solutions. Here is a list of a few things to watch: Data Integrity Initial Cost Recurring Cost Performance Data Access Security Location Compliance Every company has different approaches to Big Data and different rules and regulations. Based on various factors, one can implement their own custom Big Data solution on a cloud. Tomorrow In tomorrow’s blog post we will discuss various Operational Databases supporting Big Data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

< Previous Page | 16 17 18 19 20 21 22 23 24  | Next Page >