Search Results

Search found 2190 results on 88 pages for 'pg dump'.


  • Evaluating Solutions to Manage Product Compliance? Don't Wait Much Longer

    - by Kerrie Foy
    Depending on severity, product compliance issues can cause all sorts of problems, from run-away budgets to business closures. But effective policies and safeguards can create a strong foundation for innovation, productivity, market penetration and competitive advantage. If you’ve been putting off a systematic approach to product compliance, it is time to reconsider that decision, or indecision. Why now?

    No matter what industry, companies face a litany of worldwide and regional regulations that require proof of product compliance and environmental friendliness for market access. For example, Restriction of Hazardous Substances (RoHS) is a regulation that restricts the use of six dangerous materials used in the manufacture of electronic and electrical equipment. RoHS was originally adopted by the European Union in 2003 for implementation in 2006, and it has evolved over time through various regional versions for North America, China, Japan, Korea, Norway and Turkey. In addition, the RoHS directive allowed for material exemptions used in Medical Devices, but that exemption ends in 2014. Other regulations worth watching are the Battery Directive, Waste Electrical and Electronic Equipment (WEEE), and Registration, Evaluation, Authorization and Restriction of Chemicals (REACH) directives, and further evolving regulations are coming from governing bodies like the Food and Drug Administration (FDA) and the International Organization for Standardization (ISO).

    Corporate sustainability initiatives are also gaining urgency and influencing product design. In a survey of 405 corporations in the Global 500 by the Carbon Disclosure Project, co-written by PwC (the CDP Global 500 Climate Change Report 2012, entitled Business Resilience in an Uncertain, Resource-Constrained World), 48% of the respondents indicated they saw potential to create new products and business services as a response to climate change, yet just 21% reported a dedicated budget for the research. However, the report goes on to explain that those few companies are winning over new customers and driving additional profits by exploiting their abilities to adapt to environmental needs. The article cites Dell as an example – Dell has invested in research to develop new products designed to reduce its customers’ emissions by more than 10 million metric tons of CO2e per year. This reduction in emissions should save Dell’s customers over $1 billion per year as a result! Over time we expect to see many additional companies prove that eco-design provides marketplace benefits through differentiation and direct customer value.

    How do you meet compliance requirements and also successfully invest in eco-friendly designs? No doubt companies struggle to answer this question. After all, the journey to get there may involve transforming business models, go-to-market strategies, supply networks, quality assurance policies and compliance processes per the rapidly evolving global and regional directives. There may be limited executive focus on the initiative, an inability to quantify noncompliance, or not enough resources to justify the investment. To make things even more difficult to address, compliance responsibility can be a passionate topic within an organization, making the prospect of change on an enterprise scale problematic and time-consuming. Without a single source of truth for product data and without proper processes in place, ensuring product compliance burgeons into a crushing task that is cost-prohibitive and overwhelming to an organization. With all the overhead, certain markets or demographics become simply inaccessible. Therefore, the risk to consumer goodwill and satisfaction, revenue, business continuity, and market potential is too great not to solve the compliance challenge.

    Companies are beginning to adapt and even thrive in today’s highly regulated and transparent environment by implementing systematic approaches to product compliance that are not merely functional bandages but revenue-generating engines. Consider partnering with Oracle to help you address your compliance needs. Many of the world’s most innovative leaders and pioneers are leveraging Oracle’s Agile Product Lifecycle Management (PLM) portfolio of enterprise applications to manage the product value chain, centralize product data, automate processes, and launch more eco-friendly products to market faster. In particular, the Agile Product Governance & Compliance (PG&C) solution provides out-of-the-box functionality to integrate actionable regulatory information into the enterprise product record from the ideation phase to the disposal/recycling phase. Agile PG&C makes it possible to efficiently manage compliance per corporate green initiatives as well as regional and global directives.

    Options are critical, but so is ease of use. Anyone who’s grappled with compliance policy knows legal interpretation plays a major role in determining how an organization responds to regulation. Agile PG&C gives you the freedom to configure product compliance per your needs, while maintaining rigorous control over the product record in an easy-to-use interface that facilitates adoption efforts. It allows you to assign regulations as specifications for a part or BOM roll-up. Each specification has a threshold value that alerts you to a non-compliance issue if the threshold is exceeded. Assign as many regulations as specifications as you need to make sure a product can be sold in your target countries. Another option is to follow the lead of one of our consumer electronics customers and define your own “catch-all” specification to ensure compliance in all markets. You can give your suppliers secure access to enter their component data or integrate a third party’s data. With Agile PG&C you are able to design compliance into your products earlier, to reduce cost and improve quality downstream when the stakes are higher.

    Agile PG&C is a comprehensive solution that makes product compliance more reliable and efficient. Throughout product lifecycles, use the solution to support full material disclosures, efficiently manage declarations with your suppliers, feed compliance data into a corrective action if a product must be changed, and swiftly satisfy audits by showing all due diligence tracked in one solution. Given the compounding regulation and consumer focus on urgent environmental issues, now is the time to act. Implementing an enterprise, systematic approach to product compliance is a competitive investment. From the start, Agile Product Governance & Compliance enables companies to confidently design for compliance and sustainability, reduce the cost of compliance, minimize the risk of business interruption, deliver responsible products, and inspire new innovation. Don’t wait any longer! To find out more about Agile Product Governance & Compliance, download the data sheet, contact your sales representative, or call Oracle at 1-800-633-0738. Many thanks to Shane Goodwin, Senior Manager, Oracle Agile PLM Product Management, for contributions to this article.

    Read the article

  • Rewrite subfolder URL so it appears differently

    - by pg
    I want to change the URL that appears when you go to: MYSUBDOMAIN.MYCOMPANY.COM/carbonated-milk/ to: carbonated-milk.com/ I'm trying to figure out what to put in my .htaccess file to do this. I've been reading through all kinds of doc files and through other people's questions on Stack Overflow but can't come up with an answer for this. Do any of you have any ideas?
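
    One common approach, sketched under the assumption that carbonated-milk.com's DNS already points at the same server and that mod_rewrite is enabled (the domain and folder names are taken from the question; everything else is illustrative, not a confirmed fix). An .htaccess file alone cannot change what the address bar shows for requests that still arrive on the subdomain; instead, serve the folder's content when requests come in for the new domain:

      RewriteEngine On
      RewriteCond %{HTTP_HOST} ^(www\.)?carbonated-milk\.com$ [NC]
      RewriteCond %{REQUEST_URI} !^/carbonated-milk/
      RewriteRule ^(.*)$ /carbonated-milk/$1 [L]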

    Read the article

  • PostgreSQL Backup/Restore

    - by Ian
    What's the best way to back up a PostgreSQL database? I've tried using the documentation at www.postgresql.org, but I always get integrity errors when restoring. Right now I'm using this for backup:

      pg_dump -U myuser -d mydatabase > db.pg.dump

    and this for restore:

      pg_restore -c -r -U myuser -d mydatabase db.pg.dump

    But I'm not getting the desired results.
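
    A likely mismatch in the commands above: pg_restore only reads pg_dump's archive formats (for example the custom format), while a plain SQL dump has to be replayed with psql instead. A minimal sketch using the custom format (same user and database names as above; the flags are standard pg_dump/pg_restore options, but treat the exact invocation as a starting point rather than a verified recipe):

      pg_dump -U myuser -Fc -f db.pg.dump mydatabase
      pg_restore -U myuser --clean -d mydatabase db.pg.dump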

    Read the article

  • How do I re-enable the IPMI temperature sensors?

    - by NobleUplift
    I've never had a problem reading temperature sensors with ipmitool on my server, but recently the temperature readings started showing up as disabled:

      # ipmitool sdr list
      Temp             | disabled          | ns
      Temp             | disabled          | ns
      Ambient Temp     | 21 degrees C      | ok
      CMOS Battery     | 0x00              | ok
      VCORE            | 0x00              | ok
      VDDIO            | 0x00              | ok
      VDDA             | 0x00              | ok
      VTT              | 0x00              | ok
      VCORE            | 0x00              | ok
      VDDIO            | 0x00              | ok
      VDDA             | 0x00              | ok
      VTT              | 0x00              | ok
      VDD 1.2V PG      | 0x00              | ok
      Linear PG        | 0x00              | ok

    I am using OpenIPMI 2.0.19 and ipmitool 1.8.12. How can I re-enable my temperature sensors?
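
    A few diagnostic steps that are sometimes worth trying before digging deeper (a sketch, not a confirmed fix; "Temp" is the sensor name from the output above, and a cold BMC reset briefly interrupts out-of-band management):

      ipmitool sensor get "Temp"     # show thresholds and event/enable state for that sensor
      ipmitool sdr elist full        # full sensor records, including entity IDs
      ipmitool mc reset cold         # restart the BMC, then re-check the readings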

    Read the article

  • Mounting case-sensitive shared folder in VirtualBox

    - by rhettg
    I have an OS X host with an Ubuntu VM trying to mount a shared folder. I'm using the options:

      sudo mount -t vboxsf -o uid=1000,gid=100 pg ~/pg-host

    The folder mounts fine; however, it appears that the mounted directory is case-insensitive, even though my OS X drive is formatted case-sensitive. Are there any options to control this behavior?

    Read the article

  • L2E many to many query

    - by 5YrsLaterDBA
    I have four tables:

      Users                    PrivilegeGroups           rdPrivileges       LinkPrivilege
      ----------------------   -----------------------   ----------------   ------------------------
      userId(pk)               privilegeGroupId(pk)      privilegeId(pk)    privilegeId(pk, fk)
      privilegeGroupId(fk)     name                      code               privilegeGroupId(pk, fk)

    L2E will not create a LinkPrivilege entity for me, so we only have the Users, PrivilegeGroups and rdPrivileges entities. PrivilegeGroups and rdPrivileges have a many-to-many relationship. What I need to do is retrieve all code values from the rdPrivileges table based on a passed-in userId. How can I do it? EDIT - working code:

      var acc = from u in db.Users
                from pg in db.PrivilegeGroups
                from p in pg.rdPrivileges
                where u.UserId == userId
                   && u.PrivilegeGroups.PrivilegeGroupId == pg.PrivilegeGroupId
                select p.Code;

    Read the article

  • LINQ aggregate / SUM grouping problem

    - by Chrissi
    I'm having trouble getting my head around converting a traditional SQL aggregate query into a LINQ one. The basic data dump works like so:

      Dim result = (From i As Models.InvoiceDetail In Data.InvoiceDetails.GetAll
                    Join ih As Models.InvoiceHeader In Data.InvoiceHeaders.GetAll On i.InvoiceHeaderID Equals ih.ID
                    Join p As Models.Product In Data.Products.GetAll On i.ProductID Equals p.ID
                    Join pg As Models.ProductGroup In Data.ProductGroups.GetAll On p.ProductGroupID Equals pg.ID
                    Join gl As Models.GLAccount In Data.GLAccounts.GetAll On pg.GLAccountSellID Equals gl.ID
                    Where (gl.ID = GLID)
                    Select ih.Period, i.ExtendedValue)

    What I need to really be getting out is ih.Period (a value from 1 to 12) and a corresponding aggregate value for i.ExtendedValue. When I try to Group ih I get errors about i being out of scope/context, and I'm not sure how else to go about it.

    Read the article

  • Paging and edit category status

    - by jasmine
    To edit the home status of a category I have this link:

      <span class=\"ha\" title=\"Active in Homepage\">Active in Homepage</span><a href=\"?page=homestatus&id={$row['id']}\" class=\"hp\" title=\"\">Passive in home page</a>

    and this function:

      function homestatus() {
          $ID = mysql_real_escape_string($_GET['id']);
          $query = "UPDATE category SET home = 0 WHERE id= $ID ";
          $result = mysql_query($query);
          if (mysql_affected_rows() == 1) {
              header('Location: index.php?page=categories');
          }
      }

    Everything works fine, but there is paging: /index.php?page=categories&pg=2. I want an item edited on pg=2 to redirect back to index.php?page=categories&pg=2. How can I do this? Thanks in advance.

    Read the article

  • SQL 2005 Transaction Rollback Hung–unresolved deadlock

    - by steveh99999
    Encountered an interesting issue recently with a SQL 2005 SP3 Enterprise Edition system. Every weekend, a full database reindex was being run on this system – normally this took around one and a half hours. Then, one weekend, the job ran for over 17 hours – and had yet to complete... At this point, the DBA cancelled the job. Job status is now cancelled – issue over… However, cancelling the job had not killed the reindex transaction – DBCC OPENTRAN was still showing the transaction as open. The oldest open transaction in the database was now over 17 hours old. Consequently, transaction log % used was growing dramatically and locks were still being held in the database... Further attempts to kill the transaction did nothing, i.e. we had a transaction which could not be killed. In sysprocesses, it was apparent the SPID was in rollback status, but the SPID was not accumulating CPU or IO. Was the SPID stuck? On examination of the SQL errorlog – shortly after the reindex had started, a whole bunch of deadlock output had been produced by trace flag 1222. Then this:

      spid5s      ***Stack Dump being sent to   xxxxxxx\SQLDump0042.txt
      spid5s      * *******************************************************************************
      spid5s      *
      spid5s      * BEGIN STACK DUMP:
      spid5s      *   12/05/10 01:04:47 spid 5
      spid5s      *
      spid5s      * Unresolved deadlock
      spid5s      *
      spid5s      *
      spid5s      * *******************************************************************************
      spid5s      * -------------------------------------------------------------------------------
      spid5s      * Short Stack Dump
      spid5s      Stack Signature for the dump is 0x000001D7
      spid5s      External dump process return code 0x20000001.

    Unresolved deadlock – don't think I've ever seen one of these before…. A quick call to Microsoft support confirmed the following bug had been hit: http://support.microsoft.com/kb/961479 So, the only option to get rid of the hung SPID was to restart SQL Server… Fortunately SQL Server restarted without any issues, and I was pleasantly surprised to see that recovery on this particular database was fast. However, restarting SQL Server to fix an issue is not something I would normally rush to do... Short-term fix: the reindex was changed to use a MAXDOP of 1. The longer-term fix will be to apply the correct CU, or wait for SQL 2005 SP4, which should be released any day now, I hope.
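
    For reference, limiting the rebuild's parallelism (the short-term fix mentioned above) looks roughly like this in T-SQL; the table name is a placeholder, not something from the original post:

      ALTER INDEX ALL ON dbo.MyLargeTable
      REBUILD WITH (MAXDOP = 1);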

    Read the article

  • Dumping views with mysqldump in the right order.

    - by Bushibytes
    I have a script that backs up our database, which contains multiple tables and views constructed from those tables. The command used is:

      mysqldump -u UserName -ppassword -h hostname DatabaseName > dump.sql

    I have noticed, however, that some view definitions are backed up before the definitions of the tables. This causes an issue when restoring using the classic

      mysql -u UserName -p < dump.sql

    as when it tries to create the view, the table it needs does not exist yet. It is possible to edit the dump files before restoring, but I was wondering: is there a way to either make sure that mysqldump backs up the tables and views in the right order? Or is there a way to restore from a dump that will find the right tables to create first (or create sane temporary tables)? Edit for version: mysqldump Ver 10.11 Distrib 5.0.51b, for redhat-linux-gnu (x86_64)
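
    A workaround sketch, assuming shell access and that information_schema is readable on this 5.0 server (untested here; the credentials and names are the ones from the question): dump the base tables first, then append the view definitions, so the restore never meets a view before its tables.

      TABLES=$(mysql -u UserName -ppassword -h hostname -N -e "SELECT table_name FROM information_schema.tables WHERE table_schema='DatabaseName' AND table_type='BASE TABLE'")
      VIEWS=$(mysql -u UserName -ppassword -h hostname -N -e "SELECT table_name FROM information_schema.tables WHERE table_schema='DatabaseName' AND table_type='VIEW'")
      mysqldump -u UserName -ppassword -h hostname DatabaseName $TABLES  > dump.sql
      mysqldump -u UserName -ppassword -h hostname DatabaseName $VIEWS  >> dump.sql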

    Read the article

  • oracle sql plus spool

    - by CC
    Hi. I'm using SQL*Plus to execute a query (a select) and dump the result into a file, using the spool option. I have about 14 million lines, and it takes about 12 minutes to do the dump. I was wondering if there is something to make the dump faster? Here below are my SQL*Plus options:

      whenever sqlerror exit sql.sqlcode
      set pagesize 0
      set linesize 410
      SET trimspool ON
      set heading on
      set feedback off
      set echo off
      set termout off
      spool file_to_dump_into.txt
      select * from mytable;

    Thanks.
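
    One commonly suggested tweak, offered as a sketch rather than a guaranteed speed-up for this particular dump: fetch more rows per round trip, and keep the linesize no wider than the longest row actually needs so less padding is written out.

      set arraysize 5000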

    Read the article

  • What does * address(found in printf) mean in assembly?

    - by Mask
    Disassembling printf doesn't give much info:

      (gdb) disas printf
      Dump of assembler code for function printf:
      0x00401b38 <printf+0>:   jmp    *0x405130
      0x00401b3e <printf+6>:   nop
      0x00401b3f <printf+7>:   nop
      End of assembler dump.
      (gdb) disas 0x405130
      Dump of assembler code for function _imp__printf:
      0x00405130 <_imp__printf+0>:   je     0x405184 <_imp__vfprintf+76>
      0x00405132 <_imp__printf+2>:   add    %al,(%eax)

    How is it implemented under the hood? Why doesn't disassembling help? What does * mean before 0x405130?
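
    In AT&T syntax, jmp *0x405130 is an indirect jump: it jumps to whatever address is stored at 0x405130 (an import-table slot), so the bytes at 0x405130 are a pointer rather than code, which is why disassembling them produces nonsense. A quick way to follow the jump in gdb (a sketch; the pointer value shown is hypothetical):

      (gdb) x/wx 0x405130          # read the function pointer stored in the import slot
      0x405130:    0x77c41234
      (gdb) disas 0x77c41234       # disassemble the routine it actually points to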

    Read the article

  • Svn repository split problem

    - by Tuminoid
    I want to split a directory from a large Subversion repository into a repository of its own, and keep the history of the files in that directory. I tried the regular way of doing it first:

      svnadmin dump /path/to/repo > largerepo.dump
      cat largerepo.dump | svndumpfilter include my/directory > mydir.dump

    but that does not work, since the directory has been moved and copied over the years and files have been moved into and out of it to other parts of the repository. The result is a lot of these:

      svndumpfilter: Invalid copy source path '/some/old/path'

    The next thing I tried was to include those /some/old/path entries as they appear, and after a long, long list of files and directories included, svndumpfilter completes, BUT importing the resulting dump doesn't produce the same files as the current directory has. So, how do I properly split the directory from that repository while keeping the history? EDIT: I specifically want trunk/myproj to be the trunk in a new repository PLUS have the new repository include none of the other old stuff, i.e. there should be no possibility for anyone to update to an old revision from before the split and get/see the files. The svndumpfilter solution I tried would achieve exactly that; sadly it's not doable since the paths/files have been moved around. The solution by ng isn't acceptable since it's basically a clone plus removal of extras, which keeps ALL the history, not just the relevant myproj history. BUMP: C'mon, there must be someone who definitely knows if this is doable or not, and how!
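
    For reference, the usual svndumpfilter incantation for this kind of split, including the historical copy-source paths the errors complain about, looks roughly like this (a sketch only; the extra paths are placeholders, and whether the result comes out clean depends on how tangled the moves are):

      svnadmin dump /path/to/repo > largerepo.dump
      svndumpfilter include my/directory some/old/path another/old/path \
          --drop-empty-revs --renumber-revs < largerepo.dump > mydir.dump
      svnadmin create /path/to/newrepo
      svnadmin load /path/to/newrepo < mydir.dump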

    Read the article

  • How to write stored procedures to separate files with mysqldump?

    - by Jader Dias
    The mysqldump option --tab=path writes the creation script of each table to a separate file, but I can't find the stored procedures, except in the screen dump. I need to have the stored procedures in separate files as well. The current solution I am working on is to split the screen dump programmatically. Is there an easier way? The code I am using so far is:

      mysqldump -p$PASSWORD --routines --skip-dump-date --no-create-info --no-data --skip-opt $DATABASE > $BACKUP_PATH/$DATABASE.sql
      mysqldump -p$PASSWORD --tab=$BACKUP_PATH --skip-dump-date --no-data --skip-opt $DATABASE
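
    One sketch of a per-routine export that avoids splitting the combined dump afterwards, assuming shell access and that information_schema is readable with these credentials (the output file naming is purely illustrative):

      for r in $(mysql -p$PASSWORD -N -e "SELECT routine_name FROM information_schema.routines WHERE routine_schema='$DATABASE' AND routine_type='PROCEDURE'"); do
        mysql -p$PASSWORD -e "SHOW CREATE PROCEDURE $DATABASE.$r\G" > $BACKUP_PATH/$r.sql
      done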

    Read the article

  • How can I make a dashboard with all pending tasks using Celery?

    - by e-satis
    I want to have some place where I can watch all the pending tasks. I'm not talking about the registered functions/classes as tasks, but the actual scheduled jobs, for which I could display: name, task_id, eta, worker, etc. Using Celery 2.0.2 and djcelery, I found 'inspect' in the documentation. I tried:

      from celery.task.control import inspect

      def get_scheduled_tasks(nodes=None):
          if nodes:
              i = inspect(nodes)
          else:
              i = inspect()
          scheduled_tasks = []
          dump = i.scheduled()
          if dump:
              for worker, tasks in dump:
                  for task in tasks:
                      scheduled_task = {}
                      scheduled_task.update(task["request"])
                      del task["request"]
                      scheduled_task.update(task)
                      scheduled_task["worker"] = worker
                      scheduled_tasks.append(scheduled_task)
          return scheduled_tasks

    But it hangs forever on dump = i.scheduled(). Strange, because otherwise everything works. Using Ubuntu 10.04, django 1.0 and virtualenv.

    Read the article

  • Passing an arbitrary JavaScript object in Xul

    - by Tom Brito
    I'm following this example to pass an object to a window, but when it arrives as an argument its value is "undefined". This is my first window (note: dump is the way to print to the console when debug options are turned on):

      <?xml version="1.0"?>
      <?xml-stylesheet href="chrome://global/skin/" type="text/css"?>
      <!DOCTYPE window SYSTEM "chrome://XulWindowArgTest/locale/XulWindowArgTest.dtd">
      <window id="windowID" width="400" height="300"
              xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul">
        <script>
        <![CDATA[
          function onClickMe(event) {
            dump("begin\n");
            try {
              var args = { param1: true, param2: 42 };
              args.wrappedJSObject = args;
              var watcher = Components.classes["@mozilla.org/embedcomp/window-watcher;1"].getService(Components.interfaces.nsIWindowWatcher);
              watcher.openWindow(null, "chrome://XulWindowArgTest/content/about.xul", "windowName", "chrome", args);
            } catch (e) {
              dump("error: " + e + "\n");
            }
            dump("end\n");
          }
        ]]>
        </script>
        <button label="Click me !" oncommand="onClickMe();" />
      </window>

    and my second window:

      <?xml version="1.0"?>
      <?xml-stylesheet href="chrome://global/skin/" type="text/css"?>
      <!DOCTYPE window SYSTEM "chrome://XulWindowArgTest/locale/XulWindowArgTest.dtd">
      <window xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul" onload="onload()">
        <script>
        <![CDATA[
          function onload() {
            dump('arg = ' + window.arguments[0].wrappedJSObject + "\n");
          }
        ]]>
        </script>
        <label value="test" />
      </window>

    When the second window loads, it calls onload and prints: arg = undefined. Any idea how to fix it?

    Read the article

  • How to tell when a 'runas' execution has finished?

    - by Radek
    I use ruby 1.9.3p194 (2012-04-20) [i386-mingw32] on Windows 7. To do a MySQL backup I run:

      runas /savecred /user:yogurt\administrator "cmd.exe /k mysqldump --user=#{dbuser} --password=#{dbpassword} #{dbname} > #{dump}"

    mysqldump must be executed as administrator, and I do not run my Ruby scripts under an administrator account. runas starts a new cmd.exe and Ruby doesn't wait for it to finish. The dump process takes about one minute to finish; after that I zip the dump file and delete it. But I have to make sure that the dump process has already finished before I do any other action on that file. Right now I use sleep(60), which works, but I wonder if there is a better, more systematic solution.
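
    One commonly used workaround (a sketch, untested against this setup; the ".finished" marker name is made up for illustration): have the elevated cmd.exe drop a marker file once the dump completes, then poll for that file from the script instead of sleeping for a fixed time. Note the switch from /k to /c so the spawned window closes when the chained commands finish:

      runas /savecred /user:yogurt\administrator "cmd.exe /c mysqldump --user=#{dbuser} --password=#{dbpassword} #{dbname} > #{dump} & echo done > #{dump}.finished"

    The script then waits until #{dump}.finished exists before zipping and deleting the dump.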

    Read the article

  • GC generation 3 appearing in windbg

    - by Johnv2020
    I have a dump file of a process I'm running (trying to find a memory leak). One thing I've noticed is that when I dump the bigger objects via !do, windbg tells me that they are GC generation 3. All of these are byte arrays, so when I look at all the byte arrays in the dump I can see GC generations 0, 1, 2 and 3. Could someone explain what's going on here, as I thought there were only three generations of GC?
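
    For context, the SOS extension reports objects that live on the large object heap as "generation 3", and byte arrays of roughly 85,000 bytes or more are allocated there, so seeing generations 0-3 for System.Byte[] is expected. A few commands to confirm where the big arrays sit (a sketch, assuming SOS is already loaded; the annotations in parentheses are not part of the commands):

      !dumpheap -type System.Byte[] -stat         (size/count summary of every byte array)
      !dumpheap -type System.Byte[] -min 85000    (only the arrays large enough for the LOH)
      !eeheap -gc                                 (generation boundaries and the LOH segments)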

    Read the article

  • IE9 Loses Some CSS After Particular Form Submit [migrated]

    - by Asherion
    The site I am editing has a search form. For the record, there are several other forms on the site, contact and the like; this is the only one with an issue. Upon submission of the form, SOME of the styling is lost in IE9 (possibly other versions of IE, haven't tested that yet). Primarily, the margins and colors set on html and body appear to have been lost. Menus, banner, text, etc. all appear to retain their styles. All styles are on the one sheet that is used here... Any helpful advice? Here are the contents of the search page, the PHP used to check for the form, and the CSS that I think is lost.

    THE HTML:

      <div id="search">
        <br />
        <div style="float:right;font-size:.8em;">
          <form name="form_sidesearch" action="search.html" method="post">
            <input type="hidden" name="action" value="search" />
            <input type="text" name="search_value" value="<?php echo $systems_primary->search_value ?>" />
            <input type="submit" name="submit_search" value="Search Website" />
          </form>
          <br />
        </div>
      </div>
      <?php echo stripslashes($search_results);

    THE PHP:

      <?php
      // -- Begin Search --------------------------------------------------------------------------------------
      if($_REQUEST["action"] === "search")
      {
        if(strlen($_REQUEST["pg"]) <= 0)
        {
          $_REQUEST["pg"] = 1;
        }
        $search_results = $systems_primary->search_website("index",urldecode($_REQUEST["search_value"]),"<div class=\"listing ui-corner-all\"><a href=\"{ENTRY_URL}\" title=\"{ENTRY_TITLE}\" class=\"listing_title\">{ENTRY_TITLE}</a>{ENTRY_CONTENT} <a href=\"{ENTRY_URL}\" title=\"{ENTRY_TITLE}\" style=\"font-size:.8em;\">...read more</a></div><br /><br />",345,"all",10,$_REQUEST["pg"]);
      }
      // -- End Search ----------------------------------------------------------------------------------------
      ?>

    THE LOST CSS (could be more):

      html {
        background-color:#F6E6C8;
        font-size:16px;
        font:Helvetica;
      }
      body {
        width:1027px;
        margin:0 auto;
        background-color:#ffffff;
        font: arial, times new roman, sans-serif;
      }

    Read the article

  • How to downgrade from psql version 9.3.1 to 9.2.4

    - by peeyush singla
    I am building a Rails application that I cloned from a friend. I am using Ubuntu 13.10 and Rails 3.2.14. I am using a PostgreSQL database, and when I try to run rake db:migrate it gives me an error like this:

      PG::UndefinedObject: ERROR: type "json" does not exist
      LINE 1: ALTER TABLE "filters" ADD COLUMN "search_string" json

    I installed pg version 9.3.1 but my friend is working on 9.2.4. I don't know why this error is occurring. I tried several times to downgrade, using purge or remove commands to remove 9.3.1; everything seems to go well, but when I check psql --version it still shows me 9.3.1. Any solution?
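
    A couple of notes, offered as a sketch rather than a verified fix. The "type json does not exist" error usually means the server the migration actually connects to is older than 9.2 (the json type arrived in PostgreSQL 9.2), so it is worth checking which cluster is really running before worrying about the client version. On Debian/Ubuntu, assuming the postgresql-common tools are installed and 9.2 packages are available for your release:

      pg_lsclusters                              # list installed clusters and the ports they listen on
      sudo pg_dropcluster --stop 9.3 main        # only if you really intend to discard the 9.3 cluster
      sudo apt-get purge postgresql-9.3 postgresql-client-9.3
      sudo apt-get install postgresql-9.2 postgresql-client-9.2
      psql --version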

    Read the article

  • How to force user to use subdomain?

    - by David Stockinger
    I am hosting a webshop with OpenCart and its current URL is e.g. http://mydomain.com/shop/ I have created two subdomains ( http://pg.mydomain.com/ and http://shop.mydomain.com/ ) and both subdomains are already working as they should. However, can I restrict direct access to mydomain.com/shop/ while leaving all the files (index.php, etc.) there? Since both subdomains are pointing to http://mydomain.com/shop/, I thought this would restrict all access. So in the end, I would like my two shops to be accessible through http://pg.mydomain.com/ and http://shop.mydomain.com/, but not http://mydomain.com/shop/, while leaving all the files in http://mydomain.com/shop/.
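
    A minimal .htaccess sketch for the /shop/ directory, assuming Apache with mod_rewrite enabled (the hostnames are the ones from the question; treat this as a starting point rather than a verified OpenCart configuration). Requests that arrive under the bare domain are redirected to one of the subdomains, while requests that already come in through a subdomain are served normally:

      RewriteEngine On
      RewriteCond %{HTTP_HOST} ^(www\.)?mydomain\.com$ [NC]
      RewriteRule ^(.*)$ http://shop.mydomain.com/$1 [R=301,L]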

    Read the article

  • how do I set quad buffering with jogl 2.0

    - by tony danza
    I'm trying to create a 3d renderer for stereo vision with quad buffering with Processing/Java. The hardware I'm using is ready for this so that's not the problem. I had a stereo.jar library in jogl 1.0 working for Processing 1.5, but now I have to use Processing 2.0 and jogl 2.0 therefore I have to adapt the library. Some things are changed in the source code of Jogl and Processing and I'm having a hard time trying to figure out how to tell Processing I want to use quad buffering. Here's the previous code: public class Theatre extends PGraphicsOpenGL{ protected void allocate() { if (context == null) { // If OpenGL 2X or 4X smoothing is enabled, setup caps object for them GLCapabilities capabilities = new GLCapabilities(); // Starting in release 0158, OpenGL smoothing is always enabled if (!hints[DISABLE_OPENGL_2X_SMOOTH]) { capabilities.setSampleBuffers(true); capabilities.setNumSamples(2); } else if (hints[ENABLE_OPENGL_4X_SMOOTH]) { capabilities.setSampleBuffers(true); capabilities.setNumSamples(4); } capabilities.setStereo(true); // get a rendering surface and a context for this canvas GLDrawableFactory factory = GLDrawableFactory.getFactory(); drawable = factory.getGLDrawable(parent, capabilities, null); context = drawable.createContext(null); // need to get proper opengl context since will be needed below gl = context.getGL(); // Flag defaults to be reset on the next trip into beginDraw(). settingsInited = false; } else { // The following three lines are a fix for Bug #1176 // http://dev.processing.org/bugs/show_bug.cgi?id=1176 context.destroy(); context = drawable.createContext(null); gl = context.getGL(); reapplySettings(); } } } This was the renderer of the old library. In order to use it, I needed to do size(100, 100, "stereo.Theatre"). Now I'm trying to do the stereo directly in my Processing sketch. Here's what I'm trying: PGraphicsOpenGL pg = ((PGraphicsOpenGL)g); pgl = pg.beginPGL(); gl = pgl.gl; glu = pg.pgl.glu; gl2 = pgl.gl.getGL2(); GLProfile profile = GLProfile.get(GLProfile.GL2); GLCapabilities capabilities = new GLCapabilities(profile); capabilities.setSampleBuffers(true); capabilities.setNumSamples(4); capabilities.setStereo(true); GLDrawableFactory factory = GLDrawableFactory.getFactory(profile); If I go on, I should do something like this: drawable = factory.getGLDrawable(parent, capabilities, null); but drawable isn't a field anymore and I can't find a way to do it. How do I set quad buffering? If I try this: gl2.glDrawBuffer(GL.GL_BACK_RIGHT); it obviously doesn't work :/ Thanks.

    Read the article

  • SQL – Migrate Database from SQL Server to NuoDB – A Quick Tutorial

    - by Pinal Dave
    Data is growing exponentially and every organization with growing data is thinking of next big innovation in the world of Big Data. Big data is a indeed a future for every organization at one point of the time. Just like every other next big thing, big data has its own challenges and issues. The biggest challenge associated with the big data is to find the ideal platform which supports the scalability and growth of the data. If you are a regular reader of this blog, you must be familiar with NuoDB. I have been working with NuoDB for a while and their recent release is the best thus far. NuoDB is an elastically scalable SQL database that can run on local host, datacenter and cloud-based resources. A key feature of the product is that it does not require sharding (read more here). Last week, I was able to install NuoDB in less than 90 seconds and have explored their Explorer and Admin sections. You can read about my experiences in these posts: SQL – Step by Step Guide to Download and Install NuoDB – Getting Started with NuoDB SQL – Quick Start with Admin Sections of NuoDB – Manage NuoDB Database SQL – Quick Start with Explorer Sections of NuoDB – Query NuoDB Database Many SQL Authority readers have been following me in my journey to evaluate NuoDB. One of the frequently asked questions I’ve received from you is if there is any way to migrate data from SQL Server to NuoDB. The fact is that there is indeed a way to do so and NuoDB provides a fantastic tool which can help users to do it. NuoDB Migrator is a command line utility that supports the migration of Microsoft SQL Server, MySQL, Oracle, and PostgreSQL schemas and data to NuoDB. The migration to NuoDB is a three-step process: NuoDB Migrator generates a schema for a target NuoDB database It loads data into the target NuoDB database It dumps data from the source database Let’s see how we can migrate our data from SQL Server to NuoDB using a simple three-step approach. But before we do that we will create a sample database in MSSQL and later we will migrate the same database to NuoDB: Setup Step 1: Build a sample data CREATE DATABASE [Test]; CREATE TABLE [Department]( [DepartmentID] [smallint] NOT NULL, [Name] VARCHAR(100) NOT NULL, [GroupName] VARCHAR(100) NOT NULL, [ModifiedDate] [datetime] NOT NULL, CONSTRAINT [PK_Department_DepartmentID] PRIMARY KEY CLUSTERED ( [DepartmentID] ASC ) ) ON [PRIMARY]; INSERT INTO Department SELECT * FROM AdventureWorks2012.HumanResources.Department; Note that I am using the SQL Server AdventureWorks database to build this sample table but you can build this sample table any way you prefer. Setup Step 2: Install Java 64 bit Before you can begin the migration process to NuoDB, make sure you have 64-bit Java installed on your computer. This is due to the fact that the NuoDB Migrator tool is built in Java. You can download 64-bit Java for Windows, Mac OSX, or Linux from the following link: http://java.com/en/download/manual.jsp. One more thing to remember is that you make sure that the path in your environment settings is set to your JAVA_HOME directory or else the tool will not work. Here is how you can do it: Go to My Computer >> Right Click >> Select Properties >> Click on Advanced System Settings >> Click on Environment Variables >> Click on New and enter the following values. Variable Name: JAVA_HOME Variable Value: C:\Program Files\Java\jre7 Make sure you enter your Java installation directory in the Variable Value field. Setup Step 3: Install JDBC driver for SQL Server. 
There are two JDBC drivers available for SQL Server.  Select the one you prefer to use by following one of the two links below: Microsoft JDBC Driver jTDS JDBC Driver In this example we will be using jTDS JDBC driver. Once you download the driver, move the driver to your NuoDB installation folder. In my case, I have moved the JAR file of the driver into the C:\Program Files\NuoDB\tools\migrator\jar folder as this is my NuoDB installation directory. Now we are all set to start the three-step migration process from SQL Server to NuoDB: Migration Step 1: NuoDB Schema Generation Here is the command I use to generate a schema of my SQL Server Database in NuoDB. First I go to the folder C:\Program Files\NuoDB\tools\migrator\bin and execute the nuodb-migrator.bat file. Note that my database name is ‘test’. Additionally my username and password is also ‘test’. You can see that my SQL Server database is running on my localhost on port 1433. Additionally, the schema of the table is ‘dbo’. nuodb-migrator schema –source.driver=net.sourceforge.jtds.jdbc.Driver –source.url=jdbc:jtds:sqlserver://localhost:1433/ –source.username=test –source.password=test –source.catalog=test –source.schema=dbo –output.path=/tmp/schema.sql The above script will generate a schema of all my SQL Server tables and will put it in the folder C:\tmp\schema.sql . You can open the schema.sql file and execute this file directly in your NuoDB instance. You can follow the link here to see how you can execute the SQL script in NuoDB. Please note that if you have not yet created the schema in the NuoDB database, you should create it before executing this step. Step 2: Generate the Dump File of the Data Once you have recreated your schema in NuoDB from SQL Server, the next step is very easy. Here we create a CSV format dump file, which will contain all the data from all the tables from the SQL Server database. The command to do so is very similar to the above command. Be aware that this step may take a bit of time based on your database size. nuodb-migrator dump –source.driver=net.sourceforge.jtds.jdbc.Driver –source.url=jdbc:jtds:sqlserver://localhost:1433/ –source.username=test –source.password=test –source.catalog=test –source.schema=dbo –output.type=csv –output.path=/tmp/dump.cat Once the above command is successfully executed you can find your CSV file in the C:\tmp\ folder. However, you do not have to do anything manually. The third and final step will take care of completing the migration process. Migration Step 3: Load the Data into NuoDB After building schema and taking a dump of the data, the very next step is essential and crucial. It will take the CSV file and load it into the NuoDB database. nuodb-migrator load –target.url=jdbc:com.nuodb://localhost:48004/mytest –target.schema=dbo –target.username=test –target.password=test –input.path=/tmp/dump.cat Please note that in the above script we are now targeting the NuoDB database, which we have already created with the name of “MyTest”. If the database does not exist, create it manually before executing the above script. I have kept the username and password as “test”, but please make sure that you create a more secure password for your database for security reasons. Voila!  You’re Done That’s it. You are done. It took 3 setup and 3 migration steps to migrate your SQL Server database to NuoDB.  You can now start exploring the database and build excellent, scale-out applications. 
In this blog post, I have done my best to come up with simple and easy process, which you can follow to migrate your app from SQL Server to NuoDB. Download NuoDB I strongly encourage you to download NuoDB and go through my 3-step migration tutorial from SQL Server to NuoDB. Additionally here are two very important blog post from NuoDB CTO Seth Proctor. He has written excellent blog posts on the concept of the Administrative Domains. NuoDB has this concept of an Administrative Domain, which is a collection of hosts that can run one or multiple databases.  Each database has its own TEs and SMs, but all are managed within the Admin Console for that particular domain. http://www.nuodb.com/techblog/2013/03/11/getting-started-provisioning-a-domain/ http://www.nuodb.com/techblog/2013/03/14/getting-started-running-a-database/ Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB

    Read the article

  • OSB, Service Callouts and OQL - Part 2

    - by Sabha
    This section of the "OSB, Service Callouts and OQL" blog posting delves into thread dump analysis of the OSB server and detecting threading issues relating to Service Callouts using ThreadLogic. We also use a heap dump and OQL to identify the related Proxy and Business services. The previous section dealt with the threading model used by OSB to handle Route and Service Callouts; please refer to that post for more details.

    Read the article
