Search Results

Search found 20931 results on 838 pages for 'mysql insert'.


  • SQL SERVER – Index Created on View not Used Often – Observation of the View – Part 2

    - by pinaldave
    Earlier, I wrote an article about SQL SERVER – Index Created on View not Used Often – Observation of the View. I received an email from one of the readers, asking whether there would be any problem if we also create an index on the base table. Well, we need to discuss this situation as two different cases. Before proceeding to the discussion, I strongly suggest you read my earlier articles. To avoid duplication, I am not going to repeat the code and explanation over here. In all the earlier cases, I have explained in detail how an Index created on a View is not utilized:

    SQL SERVER – Index Created on View not Used Often – Limitation of the View 12
    SQL SERVER – Index Created on View not Used Often – Observation of the View
    SQL SERVER – Indexed View always Use Index on Table

    As per the earlier blog posts, so far we have done the following: Create a Table; Create a View; Create Index On View; Write SELECT with ORDER BY on View. However, the blog reader who emailed me suggested an extension of that logic, which is as follows: Create a Table; Create a View; Create Index On View; Write SELECT with ORDER BY on View; Create Index on the Base Table; Write SELECT with ORDER BY on View. After doing the last two steps, the question is: "Will the query on the View utilize the Index on the View, or will it use the Index of the base table?" Let us first run the complete example.

    USE tempdb
    GO
    IF EXISTS (SELECT * FROM sys.views WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[SampleView]'))
    DROP VIEW [dbo].[SampleView]
    GO
    IF EXISTS (SELECT * FROM sys.objects WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[mySampleTable]') AND TYPE IN (N'U'))
    DROP TABLE [dbo].[mySampleTable]
    GO
    -- Create SampleTable
    CREATE TABLE mySampleTable (ID1 INT, ID2 INT, SomeData VARCHAR(100))
    INSERT INTO mySampleTable (ID1, ID2, SomeData)
    SELECT TOP 100000 ROW_NUMBER() OVER (ORDER BY o1.name),
           ROW_NUMBER() OVER (ORDER BY o2.name),
           o2.name
    FROM sys.all_objects o1 CROSS JOIN sys.all_objects o2
    GO
    -- Create View
    CREATE VIEW SampleView WITH SCHEMABINDING AS
    SELECT ID1, ID2, SomeData FROM dbo.mySampleTable
    GO
    -- Create Index on View
    CREATE UNIQUE CLUSTERED INDEX [IX_ViewSample] ON [dbo].[SampleView] (ID2 ASC)
    GO
    -- Select from view
    SELECT ID1, ID2, SomeData FROM SampleView ORDER BY ID2
    GO
    -- Create Index on Original Table
    -- On Column ID1
    CREATE UNIQUE CLUSTERED INDEX [IX_OriginalTable] ON mySampleTable (ID1 ASC)
    GO
    -- On Column ID2
    CREATE UNIQUE NONCLUSTERED INDEX [IX_OriginalTable_ID2] ON mySampleTable (ID2)
    GO
    -- Select from view
    SELECT ID1, ID2, SomeData FROM SampleView ORDER BY ID2
    GO

    Now let us see the execution plans for both of the SELECT statements. Before Index on Base Table (with Index on View): (execution plan screenshot). After Index on Base Table (with Index on View): (execution plan screenshot). Looking at both execution plans, it is very clear that with or without the index on the base table, the query on the View keeps using the Index created on the View. Alright, I have written about 11 disadvantages of Views, and now I have documented one case where the View does use its Index. Anybody who says I am being harsh on Views can now point out that I found one place where an Index on a View can be helpful. Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL View, SQLServer, T SQL, Technology
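    Beyond eyeballing the two graphical plans, here is a hedged way to cross-check which indexes the SELECT statements actually touched. The DMV and catalog view below are standard SQL Server; the object names match the script above, and this assumes the script was just run in tempdb:

    SELECT OBJECT_NAME(s.[object_id]) AS ObjectName,
           i.name AS IndexName,
           s.user_seeks, s.user_scans, s.user_lookups
    FROM sys.dm_db_index_usage_stats s
    INNER JOIN sys.indexes i
        ON s.[object_id] = i.[object_id] AND s.index_id = i.index_id
    WHERE s.database_id = DB_ID('tempdb')
      AND OBJECT_NAME(s.[object_id]) IN ('mySampleTable', 'SampleView')
    GO

    If the view's index is the one being used, IX_ViewSample should show its seek/scan counters climbing after each SELECT.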

    Read the article

  • Sun Ray 3 Plus Appliance Announced

    - by [email protected]
    Many of you out there were wondering whether Oracle was going to keep and add to the Sun Ray and Sun virtualized desktop product suite. There have been a number of affirmative statements over the last many months, but none of them resound like this: the introduction of a new product pretty much proves the point. A couple of minutes before 3:00 local time yesterday, Oracle announced the release of a new Sun Ray appliance, the Sun Ray 3 Plus. This is the unit that will replace the SR 2 FS (which has been for sale since the middle of last decade). Physically it is about the same size as the 2 FS, but there are some significant differences. As you can see, there is no smart card reader in the front – that has moved to the top to ensure only one hand is required to insert the card. There is also a larger surround on the card reader that lights up to show the user the card is being read properly. A new power on/off switch on the front essentially brings power consumption to ~0 watts, and there is also a new 'sleep' timer that watches for 30 minutes of inactivity and then drops power consumption down to ~1 watt. There are also two USB 2.0 ports accessible on the front instead of one, along with the standard mic-in and headphone-out ports. There is even more interesting stuff on the back: from the top down there are two more USB 2.0 ports for a total of four, and the Oracle "Peripheral Kit" keyboard includes a 3-port USB hub, too. There's a 10/100/1000 Ethernet port as well as a 1000 Mb SFP port, a standard DB-9 serial port, and then two DVI ports. And then there is the really big news: those two DVI ports drive 2560 x 1600 resolution each. Most PCs can't do that without adding an adapter card. Now, the images I have here were taken on a prototype a couple of months back. They are essentially the same as the production unit, but if you would like to see an image of the production Sun Ray 3 Plus unit, you can see one here. There is a full data sheet available here. So this is the first Oracle Sun Ray desktop appliance – proof that the product line lives on. A very good start!

    Read the article

  • Ghost Records, Backups, and Database Compression…With a Pinch of Security Considerations

    - by Argenis
    Today Jeffrey Langdon (@jlangdon) posed the following questions on #SQLHelp. So I set out to answer them, and I said to myself: "Hey, I haven't blogged in a while, how about I blog about this particular topic?" Thus, this post was born. (If you have never heard of Ghost Records and/or the Ghost Cleanup Task, go see this blog post by Paul Randal.)

    1) Do ghost records get copied over in a backup?

    If you guessed yes, you guessed right. The backup process in SQL Server takes all data as it is on disk – it doesn't crack the pages open to selectively pick which slots have actual data and which ones do not. The whole page is backed up, regardless of its contents. Even if ghost cleanup has run and processed the ghost records, the slots are not overwritten immediately; they remain until another DML operation comes along and reuses them. As a matter of fact, all of the allocated space for a database will be included in a full backup. So, this poses a bit of a security/compliance problem for some of you DBA folk: if you want to take a full backup of a database after you've purged sensitive data, you should rebuild all of your indexes (with FILLFACTOR set to 100%). But the empty space on your data file(s) might still contain sensitive data! A SHRINKFILE might help get rid of that (not so) empty space, but that might not be the end of your troubles – you might STILL have (not so) empty space in your files! One approach you can follow is to export all of the data in your database to another SQL Server instance that does NOT have Instant File Initialization enabled. This can be a tedious and time-consuming process, though, so you have to weigh your options and see what makes sense for you. Snapshot Replication is another idea that comes to mind.

    2) Does Compression get rid of ghost records (2008)?

    The answer to this is no. The Ghost Records/Ghost Cleanup Task mechanism is alive and well on compressed tables and indexes. You can prove this by running a simple script:

    CREATE DATABASE GhostRecordsTest
    GO
    USE GhostRecordsTest
    GO
    CREATE TABLE myTable (myPrimaryKey int IDENTITY(1,1) PRIMARY KEY CLUSTERED,
                          myWideColumn varchar(1000) NOT NULL DEFAULT 'Default string value')
    ALTER TABLE myTable REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = PAGE)
    GO
    INSERT INTO myTable DEFAULT VALUES
    GO 10
    DELETE myTable WHERE myPrimaryKey % 2 = 0
    DBCC TRACEON(2514)
    DBCC CHECKTABLE(myTable)

    Trace flag 2514 will make DBCC CHECKTABLE give you an extra tidbit of information in its output. For the above script: "Ghost Record count = 5"

    Until next time,

    -Argenis
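    As a footnote to the rebuild advice in the post, here is a minimal sketch of a FILLFACTOR 100 rebuild; the table and index names are hypothetical:

    -- Rebuild one index, packing pages completely after a purge of sensitive rows
    ALTER INDEX PK_Customers ON dbo.Customers REBUILD WITH (FILLFACTOR = 100);
    -- Or rebuild every index on the table in one statement
    ALTER INDEX ALL ON dbo.Customers REBUILD WITH (FILLFACTOR = 100);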

    Read the article

  • SQL SERVER – Index Created on View not Used Often – Observation of the View

    - by pinaldave
    I always enjoy writing about concepts related to Views. Views are a frequently used concept, so it's not surprising that I have seen so many misconceptions about this subject. To clear up such misconceptions, I previously wrote the article SQL SERVER – The Limitations of the Views – Eleven and more…. I also wrote a follow-up article wherein I demonstrated that, even without creating an index on the base table, the query on the View will not use the Index created on the View. You can read that demonstration over here: SQL SERVER – Index Created on View not Used Often – Limitation of the View 12. I promised in that post that I would also write an article demonstrating the condition under which the Index will be used. I got many responses suggesting that I can do that by using NOEXPAND; I agree, and I have already written about this in my original summary article. Here is a way for you to see how an Index created on a View can be utilized. We will do the following steps in this exercise:

    Create a Table
    Create a View
    Create Index On View
    Write SELECT with ORDER BY on View

    USE tempdb
    GO
    IF EXISTS (SELECT * FROM sys.views WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[SampleView]'))
    DROP VIEW [dbo].[SampleView]
    GO
    IF EXISTS (SELECT * FROM sys.objects WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[mySampleTable]') AND TYPE IN (N'U'))
    DROP TABLE [dbo].[mySampleTable]
    GO
    -- Create SampleTable
    CREATE TABLE mySampleTable (ID1 INT, ID2 INT, SomeData VARCHAR(100))
    INSERT INTO mySampleTable (ID1, ID2, SomeData)
    SELECT TOP 100000 ROW_NUMBER() OVER (ORDER BY o1.name),
           ROW_NUMBER() OVER (ORDER BY o2.name),
           o2.name
    FROM sys.all_objects o1 CROSS JOIN sys.all_objects o2
    GO
    -- Create View
    CREATE VIEW SampleView WITH SCHEMABINDING AS
    SELECT ID1, ID2, SomeData FROM dbo.mySampleTable
    GO
    -- Create Index on View
    CREATE UNIQUE CLUSTERED INDEX [IX_ViewSample] ON [dbo].[SampleView] (ID2 ASC)
    GO
    -- Select from view
    SELECT ID1, ID2, SomeData FROM SampleView ORDER BY ID2
    GO

    When we check the execution plan for this query, we can clearly see that the Index created on the View is utilized: the ORDER BY clause uses the Index created on the View. I hope this makes the puzzle of how the Index is used on the View simpler. Again, I strongly recommend reading my earlier series about the limitations of Views, found here: SQL SERVER – The Limitations of the Views – Eleven and more…. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL View, T SQL, Technology
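    For completeness, this is roughly what the NOEXPAND variant mentioned above looks like against the same view. The hint is standard T-SQL and directs the optimizer to use the view's own index rather than expanding the view definition:

    SELECT ID1, ID2, SomeData
    FROM SampleView WITH (NOEXPAND)
    ORDER BY ID2
    GO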

    Read the article

  • Good fix vs Quick fix [duplicate]

    - by Andrea Girardi
    This question already has an answer here:
    Does craftsmanship pay off? [duplicate] – 16 answers
    Good design: How much hackyness is acceptable? [duplicate] – 9 answers
    How do you balance between "do it right" and "do it ASAP" in your daily work? – 14 answers

    Let's start from this principle: quality is a feature that you can't add to a project in the middle of the development process. This is the scenario: two weeks to go live with my project, and one of the developers added a method used by only one web application to our framework (our framework is a bunch of Java classes used to extract content from MongoDB, Alfresco and MySQL, and it's used by several web applications). I'm the team leader and I told him to generalize the method to keep the framework reusable, but he said, "No, I prefer not to do that, because there are a lot of bugs that need to be fixed." The manager agrees with him, and of course I don't. Is it better to make the extra effort to keep a framework free from any application-specific implementation (probably used only by one web application), or to just add the method because it works?

    So, my question is: is it correct to write code that merely works, or is it better to write code that works and doesn't suck (i.e., without embedded values, one-off methods, extra classes, extra database columns, etc.)? How is it possible to justify the extra time to management (to be honest, this kind of fix requires only 10 extra minutes to write good generic code)? How is it possible to argue to young developers and PMs that this is the right way to write code? In general: good fix or quick fix?

    Ah, and 10 minutes after I got the email from the PM, he asked me why, on a URL of application 2, the name of application 1 appeared during login. I like to quote Jeff Atwood: "Don't leave 'broken windows' (bad designs, wrong decisions, or poor code) unrepaired. Fix each one as soon as it is discovered." Excerpt from: Hyperink, "How-To-Stop-Sucking-And-Be-Awesome-Instead."

    Read the article

  • What's a good scheme for multi-user database synchronization?

    - by Mason Wheeler
    I'm working on a system to allow multiple users to collaborate on an online project. Everything is fairly straightforward, except for keeping the users in sync. Each user has their own local copy of the project database, which allows them to make changes and test things out, and then send the updates to the central server. But this runs into the classic synchronization question: how do you keep two users from editing the same thing and stomping on each other's work? I've got an idea that should work, but I wonder if there's a simpler way to do it. Here's the basic concept:

    All project data is stored in a relational database. Each row in the database has an owner. If the current user is not the owner, he can read but not write that row. (This is enforced client-side.) The user can send a request to the server to take ownership of a row, which will be granted if the server's copy says that the current owner is NULL, or to release ownership when they're done with it. It is not possible to release ownership without committing changes to the server. It is not possible to commit changes to the server without having first downloaded all outstanding changes from the server. When any changes are made to rows you own, a trigger marks those rows as Dirty. When you commit changes, the database is scanned for all Dirty rows in all tables, and the data is serialized into an update file, which is posted to the server, and all rows are marked Clean. The server applies the updates on its end and keeps the file around. When other users download changes, the server sends them the update files that they haven't already received.

    So, essentially this is a reinvention of version control on a relational database. (Sort of.) As long as taking ownership and applying updates to the server are guaranteed atomic operations, and the server verifies that some smart-aleck user didn't edit their local database so they could send an update for a row they don't have ownership of, it should be guaranteed to be correct, with no need to worry about merges and merge conflicts. (I think.) Can anyone think of any problems with this scheme, or ways to do it better? (And no, "build [insert VCS here] into your project" is not what I'm looking for. I've thought of that already. VCSs work well with text, and not so well with other file formats, such as relational databases.)
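    A minimal T-SQL sketch of the ownership and dirty-flag mechanics the question describes (all object names are hypothetical, and a real implementation would repeat this per table):

    -- Each row carries an owner and a dirty marker
    CREATE TABLE ProjectItem (
        ItemID  INT PRIMARY KEY,
        OwnerID INT NULL,               -- NULL means nobody owns the row
        IsDirty BIT NOT NULL DEFAULT 0,
        Payload VARCHAR(MAX)
    );
    GO
    -- Any user edit flags the row for the next commit
    CREATE TRIGGER trg_ProjectItem_MarkDirty ON ProjectItem AFTER UPDATE
    AS
        -- Skip when the sync process itself is flipping the flag back to Clean
        IF UPDATE(IsDirty) RETURN;
        UPDATE pi SET IsDirty = 1
        FROM ProjectItem pi INNER JOIN inserted i ON pi.ItemID = i.ItemID;
    GO
    -- Server side: taking ownership is a single atomic UPDATE
    DECLARE @UserID INT = 42, @ItemID INT = 7;
    UPDATE ProjectItem SET OwnerID = @UserID
    WHERE ItemID = @ItemID AND OwnerID IS NULL;
    -- 1 row affected means ownership was granted; 0 means someone else holds it
    SELECT @@ROWCOUNT AS RowsAffected;

    The conditional UPDATE is what makes the grant atomic: two simultaneous requests cannot both observe OwnerID as NULL and both succeed.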

    Read the article

  • Announcing Oracle Enterprise Content Management Suite 11g

    - by [email protected]
    Today Oracle announced Oracle Enterprise Content Management Suite 11g. This is a major release for us, and it reinforces our three key themes at Oracle:

    Complete – New in this release, Oracle ECM Suite 11g is built on a single, unified repository. Every piece of content – documents, HTML pages, digital assets, scanned images – is stored in and accessible directly from the repository, whether you are working on websites, creating brand logos, processing accounts payable invoices, or running records and retention functions. It makes complete, end-to-end management of content possible, from the point it enters the organization through its entire lifecycle. Also new in this release, the installation, access, monitoring and administration of Oracle ECM Suite 11g is centralized. As a complete system, organizations can lower the costs of training and usage by having a centralized source of information that is easily administered. As part of this new unified repository release, Oracle has released a benchmarking white paper that shows the extreme performance and scalability of Oracle ECM Suite: when tested on a two-node UCM Server running on Sun Oracle DB Machine Half Rack hardware with an Exadata storage server, Oracle ECM Suite 11g was able to ingest over 178 million documents per day.

    Open – Oracle ECM Suite 11g is built on a service-oriented architecture. All functions are available through standards-based service calls in Web Services or Java. In this release Oracle unveils Open Web Content Management, a revolutionary approach to web content management that decouples the content management process from the process of creating web applications. One piece of this approach is one-click web content management: with one click, a web application builder can drag content services into their application, enabling their users to also edit content with just one click. Open Web Content Management is also open because it enables Web developers to add Web content management to new and existing JavaServer Pages (JSP), JavaServer Faces (JSF) and Oracle Application Development Framework (ADF) Faces applications. It offers open content distribution as well: Oracle ECM Suite 11g provides flexible deployment options with a built-in smart cache, so organizations can deliver Web sites or Web applications without requiring Oracle ECM Suite as part of the delivery system.

    Integrated – Oracle ECM Suite 11g also offers a series of next-generation desktop integrations, such as: new MS Office integration with menus to access managed content, insert managed links, and compare managed documents using standard MS Office reviewing tools; automatic identity tagging of documents on download, to help users understand which versions they are viewing and to prevent duplicate content items in the content repository; new "smart productivity folders" that show a user's workflow inbox, saved searches and checked-out content directly from Windows Explorer; drag-and-drop metadata pop-ups; and check-in and check-out for all file formats with any standard WebDAV server. As part of Oracle's Enterprise Application Documents initiative, Oracle Content Management 11g also provides certified application integrations with solution templates.

    You can read the press release here. You can see more assets at the launch center here. You can sign up for the announcement webinar and hear more about the new features here. You can read the benchmarking study here.

    Read the article

  • XEROX Phaser 3160N installation on UBUNTU 12.04 LTS machine

    - by Greg Verrall
    I have recently had Windows XP die on one of my machines and have installed Ubuntu. The OS works great, except for installing the Xerox Phaser 3160N printer. The OS can find and install the network printer, but when I print a test page, it tells me "Internal Error – Please use the correct driver". I have the correct drivers, as your support team has sent me the link (http://www.support.xerox.com/support/phaser-3160/file-download/enau.html?operatingSystem=linux&fileLanguage=en_GB&contentId=105724&from=downloads&viewArchived=false), but I cannot install these drivers to run the printer. These are the instructions from the online guide for installing on a Linux machine:

    1. Make sure that the machine is connected to your network and powered on. Also, your machine's IP address should have been set.
    2. Insert the supplied software CD into your CD-ROM drive.
    3. Double-click the CD-ROM icon that appears on your Linux desktop.
    4. Double-click the Linux folder.
    5. Double-click the install.sh icon.
    6. The Xerox Installer window opens. Click Continue.
    7. The Add printer wizard window opens. Click Next.
    8. Select Network printer and click the Search button.
    9. The printer's IP address and model name appear in the list field.
    10. Select your machine and click Next.

    I get as far as step 5, and step 6 never happens; if it did, it would be very easy from there. There are options to add additional software to Ubuntu, however it does not recognise the installation CD as valid when I try to add it as a source. Any ideas on who can help me? Regards, Greg Verrall

    Read the article

  • Shutdown Hangs for 5 Minutes on Kubuntu 14.04

    - by Augustinus
    I've had persistent problems with a 5 minute hang at shutdown for the last three versions of Kubuntu (13.04, 13.10, and now 14.04). I suspect this is not a KDE-specific problem. Recently, I performed a fresh installation of Kubuntu 14.04 from a live USB, and shutdown worked normally for about a week. The hang-up is now happening again, and I can't figure out why. A brief description of the problem: the hang-up occurs with all methods of initiating a normal shutdown (clicking the shutdown or restart button in KDE, sudo shutdown -h now, sudo reboot). The shutdown splash screen appears. Using the down-arrow to access verbose messages, I see "Asking all remaining processes to terminate." This message remains for 5 minutes with no disk activity. Finally, a rapid series of messages flurries to the screen:

    * All processes ended within 300 seconds... [ OK ]
    nm-dispatcher.action: Caught signal 15, shutting down...
    ModemManager[852]: <warn> Could not acquire the 'org.freedesktop.ModemManager1' service name
    ModemManager[852]: <info> ModemManager is shut down
    * Deactivating swap... [ OK ]
    * Unmounting local filesystems... [ OK ]
    * Will now restart

    Possible sources of the problem: before the problem reappeared, I had mainly been doing routine computing. I have kept the system up to date using apt-get upgrade and apt-get dist-upgrade. The only other notable incident was a power failure. I do not have the computer connected to a UPS, so the power failure resulted in an immediate shutdown. Could this have corrupted an important file which must be accessed at shutdown? Is there any way that could cause a 5-minute hang-up? Here is a list of packages that were updated before the problem appeared:

    bash iotop dpkg dpkg-dev python3-software-properties libdpkg-perl software-properties-kde software-properties-common akonadi-backend-mysql libakonadiprotocolinternals1 akonadi-server firefox-locale-en firefox flashplugin-installer libqapt2 libqapt2-runtime thunderbird openjdk-7-jre-headless thunderbird-locale-en kubuntu-driver-manager qapt-deb-installer openjdk-7-jre qapt-batch icedtea-7-jre-jamvm libelf1 dpkg dpkg-dev libdpkg-perl libjbig0 gettext-base libgettextpo-dev libssl1.0.0 libgettextpo0 libasprintf-dev linux-headers-3.13.0-24 gettext libasprintf0c2 linux-headers-3.13.0-24-generic openssl linux-libc-dev gstreamer0.10-qapt kubuntu-desktop linux-image-extra-3.13.0-24-generic linux-image-3.13.0-24-generic

    I would appreciate any help with this.

    Read the article

  • SQL SERVER – Select and Delete Duplicate Records – SQL in Sixty Seconds #036 – Video

    - by pinaldave
    Developers often face situations where they find that a column has duplicate records and they want to delete them. A good developer will never delete any data without first observing it and making sure that what is being deleted is absolutely fine to delete. Before deleting duplicate data, one should select it and see if the data is really duplicated. In this video we demonstrate two scripts: 1) one that selects duplicate records and 2) one that deletes duplicate records. We are assuming that the table has a unique incremental id. Additionally, we are assuming that in the case of duplicate records we would like to keep the latest record.

    If there is really a business need to keep only unique records, one should consider creating a unique index on the column. A unique index will prevent users from entering duplicate data into the table in the first place. This should be the best solution. However, deleting duplicate data is also a very valid request. If the user realizes that they need to keep only unique records in the column and they are willing to create a unique constraint, the very first requirement for creating a unique constraint is to delete the existing duplicate records. Let us see how to do this in Sixty Seconds. Here is the script which is used in the video:

    USE tempdb
    GO
    CREATE TABLE TestTable (ID INT, NameCol VARCHAR(100))
    GO
    INSERT INTO TestTable (ID, NameCol)
    SELECT 1, 'First'
    UNION ALL
    SELECT 2, 'Second'
    UNION ALL
    SELECT 3, 'Second'
    UNION ALL
    SELECT 4, 'Second'
    UNION ALL
    SELECT 5, 'Second'
    UNION ALL
    SELECT 6, 'Third'
    GO
    -- Selecting Data
    SELECT * FROM TestTable
    GO
    -- Detecting Duplicate
    SELECT NameCol, COUNT(*) TotalCount
    FROM TestTable
    GROUP BY NameCol
    HAVING COUNT(*) > 1
    ORDER BY COUNT(*) DESC
    GO
    -- Deleting Duplicate
    DELETE FROM TestTable
    WHERE ID NOT IN (SELECT MAX(ID) FROM TestTable GROUP BY NameCol)
    GO
    -- Selecting Data
    SELECT * FROM TestTable
    GO
    DROP TABLE TestTable
    GO

    Related Tips in SQL in Sixty Seconds: SQL SERVER – Delete Duplicate Records – Rows; SQL SERVER – Count Duplicate Records – Rows; SQL SERVER – 2005 – 2008 – Delete Duplicate Rows; Delete Duplicate Records – Rows – Readers Contribution; Unique Nonclustered Index Creation with IGNORE_DUP_KEY = ON – A Transactional Behavior. What would you like to see in the next SQL in Sixty Seconds video? Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video Tagged: Excel
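    As an aside, on SQL Server 2005 and later the same cleanup can be written with ROW_NUMBER(). This is an alternative sketch against the same TestTable, not the script used in the video; like the original, it keeps the row with the highest ID in each group:

    WITH Ranked AS
    (
        SELECT ID,
               ROW_NUMBER() OVER (PARTITION BY NameCol ORDER BY ID DESC) rn
        FROM TestTable
    )
    DELETE FROM Ranked WHERE rn > 1
    GO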

    Read the article

  • SQL SERVER – Rename Columnname or Tablename – SQL in Sixty Seconds #032 – Video

    - by pinaldave
    We all make mistakes at some point in time, and we all change our opinions. There are quite a lot of people in the world who have changed their names after they have grown up. Some corrected their parents' mistake and some created a new one. Well, databases are not protected from such incidents either. There are many reasons why developers may want to change the name of a column or table after it was initially created. The goal of this video is not to dwell on the reasons but to learn how we can rename a column or table. Earlier I wrote an article on this subject over here: SQL SERVER – How to Rename a Column Name or Table Name. I have revised that article and created this video.

    There is one very important point to remember: by changing the column name or table name, one creates the possibility of errors in the applications where those columns and tables are used. When any column or table name is changed, the developer should go through every place in the code base, ad-hoc queries, stored procedures, views and any other place where they might be used, and change them to the new name. If this is not followed religiously, there is a good chance that the application will stop working due to the name change. One also has to remember that changing a column name does not change the names of the indexes, constraints, etc., and they will continue to carry the old name. Though this will not stop the show, it will create visual discomfort as well as confusion in many cases.

    Here is my question back to you: have you ever changed a column name or table name in a production database (after the project went live)? If yes, what was the scenario and the need for doing it? After all, it is just a name. Let me know what you think of this video. Here is the updated script:

    USE tempdb
    GO
    CREATE TABLE TestTable (ID INT, OldName VARCHAR(20))
    GO
    INSERT INTO TestTable VALUES (1, 'First')
    GO
    -- Check the Tabledata
    SELECT * FROM TestTable
    GO
    -- Rename the ColumnName
    sp_RENAME 'TestTable.OldName', 'NewName', 'Column'
    GO
    -- Check the Tabledata
    SELECT * FROM TestTable
    GO
    -- Rename the TableName
    sp_RENAME 'TestTable', 'NewTable'
    GO
    -- Check the Tabledata - Error
    SELECT * FROM TestTable
    GO
    -- Check the Tabledata - New
    SELECT * FROM NewTable
    GO
    -- Cleanup
    DROP TABLE NewTable
    GO

    Related Tips in SQL in Sixty Seconds: SQL SERVER – How to Rename a Column Name or Table Name. What would you like to see in the next SQL in Sixty Seconds video? Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video Tagged: Excel
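    Since the post notes that indexes and constraints keep their old names after a rename, here is a hedged sketch of renaming an index explicitly as well; the index name below is hypothetical:

    -- An index created against the old names keeps them until renamed explicitly
    EXEC sp_RENAME N'NewTable.IX_TestTable_OldName', N'IX_NewTable_NewName', N'INDEX'
    GO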

    Read the article

  • How to set conditional activation to taskflows?

    - by shantala.sankeshwar(at)oracle.com
    This article describes implementing conditional activation for taskflows.

    Use Case Description: Suppose we have a taskflow dropped as a region on a page, and this region is enclosed in a popup. By default, when the page is loaded, the respective region also gets loaded, since a region model needs to provide a viewId whenever one is requested. A consequence of this is that the TaskFlowRegionModel always has to initialize its task flow and execute the task flow's default activity in order to determine a viewId, even if the region is not visible on the page. This can lead to the unnecessary performance overhead of executing a task flow to generate viewIds for regions that are never visible. To improve performance, we need to set the taskflow binding's activation property to 'conditional'. Below is a simple use case that shows exactly how to set conditional activation on taskflow bindings.

    Steps:
    1. Create an ADF Fusion web application.
    2. Create Business Components for the Emp table.
    3. Create a view criteria where deptno = :some_bind_variable.
    4. Generate the EmpViewImpl.java file, write the below code, and then expose the method to the client interface:

    public void filterEmpRecords(Number deptNo) {
        // Code to filter by deptno
        ensureVariableManager().setVariableValue("some_bind_variable", deptNo);
        this.applyViewCriteria(this.getViewCriteria("EmpViewCriteria"));
        this.executeQuery();
    }

    5. Create an ADF Taskflow with page fragments and drop the above method onto the taskflow.
    6. Also drop the view activity (showEmp.jsff). Define a control flow case from the above method activity to the view activity, and set the method activity as the default activity.
    7. Create the main.jspx page and drop the above taskflow as a region on this page.
    8. Surround the region with a dialog, and surround the dialog with a popup (id is Popup1).
    9. Drop a commandButton on the above page and insert af:showPopupBehavior inside the commandButton:

    <af:commandButton text="show popup" id="cb1">
        <af:showPopupBehavior popupId="::Popup1"/>
    </af:commandButton>

    10. Now if we run this main page, we will notice that the method action gets called even before the popup is launched. We can avoid this by setting the activation property of the taskflow to conditional.
    11. Go to the bindings of the above main page, select the taskflow binding, set its activation property to 'conditional', and set its active property to the Boolean value #{Somebean.popupVisible}. By default its value should be false.
    12. We need to set the above Boolean value to true only when the popup is launched. This can be achieved by inserting a setPropertyListener inside the popup:

    <af:setPropertyListener from="true" to="#{Somebean.popupVisible}" type="popupFetch"/>

    13. Now if we run the page, we will notice that the method action is not called, and only when we click on the 'show popup' button does the method action get called.

    Read the article

  • Add Transitions to Slideshows in PowerPoint 2010

    - by DigitalGeekery
    Sitting through a PowerPoint presentation can sometimes get a little boring. You can make your slideshows more interesting by adding transitions between the slides in your presentations. Transitions certainly aren't new to PowerPoint, but Office 2010 adds a number of exciting new transitions and options.

    Add Transitions: Select the slide to which you want to apply a transition. On the Transitions tab, select the More button to reveal all the transition options in the gallery. Select the transition you'd like to apply to your slide. The transitions are divided into three types: Subtle, Exciting, and Dynamic Content. You can hover your mouse over each item in the gallery to preview the transition with Live Preview. You can adjust many of the transitions using Effect Options; the options will vary depending on which transition you've selected. You can add additional customizations in the Timing group: add sound by selecting one of the options in the Sound dropdown list, change the duration of the transition, or choose to advance the slide On Mouse Click (the default) or automatically after a certain period of time. If you'd like to apply one transition to every slide in your presentation, select the Apply To All button. You can preview your transition by clicking the Preview button on the Transitions tab.

    A few clicks is all it takes to add a little energy and excitement to an otherwise dry presentation. Are you looking for more ways to spice up your PowerPoint 2010 slideshows? You could try adding animation to text and images, or adding video from the web.

    Similar Articles: Insert Tables Into PowerPoint 2007; Bring Office 2003 Menus Back to 2010 with UBitMenu; Embed True Type Fonts in Word and PowerPoint 2007 Documents; How to Add Video from the Web in PowerPoint 2010; Add Artistic Effects to Your Pictures in Office 2010

    Read the article

  • Site Search Engine for 1,000 page website

    - by Ian
    I manage a website with about 1,000 articles that need to be searchable by my members. The site search engines I've tried all had their own problems:

    Fluid Dynamics Search Engine – Since it's written in Perl, it was a bit hacky to integrate with my PHP-based CMS; I basically had to file_get_contents the search results page. However, FDSE had the best search results.

    Google CSE – Ugh, the search results SUCK. It can't find documents even when searching for unique strings. I'm so surprised that a Google search product is this bad. Nor can I get any answers on their 'help' forums, and I am a paying user. Boo, Google. Boo.

    Sphider – Again, bad search results; it was unable to locate some phrases used in link text. Better results than Google CSE, though. Shame on Google that a free PHP script has better search results than their paid application.

    IndexTank – This one looked really promising. I got all set up with their PHP API client, but it would only randomly add articles that I submitted. Out of 700+ articles I pushed to the index through their API, only 8 made it in, and I was unable to find any help on this subject. Update for IndexTank: got the above issue fixed, so this looks most promising so far.

    The site itself runs on PHP/MySQL and FreeBSD, though this shouldn't matter for a web-crawling indexer. I've looked at Lucene, but I don't know anything about Java or installing Java programs on my web server. I also do not have root access on my web server, if that would be required for installation. I really don't need a lot of fancy features. It just needs to be able to crawl my web site and return great (even decent!) search results. I don't need any crazy search operators. It doesn't need to index anything off my primary domain. It just needs to work! Thanks, Hive Mind!

    Read the article

  • BAM design pointers

    - by Kavitha Srinivasan
    In working recently with a large Oracle customer on SOA and BAM, I discovered that some BAM best practices are not quite as well known as I had always assumed! There is a doc bug out to formally incorporate those learnings, but here are a few notes.

    EMS-DO parity: When using EMS (Enterprise Message Source) as a BAM feed, the best practice is to use one EMS to write to one Data Object. There is a possibility of collisions and duplicates when multiple EMS write to the same row of a DO at the same time. This customer had 17 EMS writing to one DO at the same time. Every sensor in their BPEL process wrote to one topic, but the topic was read by 17 EMS, each corresponding to one sensor. They then used XSL within BAM to transform the payload into the BAM DO format. Hence, for a given BPEL instance, 17 sensors fired, populated 1 JMS topic, and were consumed by 17 EMS, which in turn all wrote to 1 Data Object. (You can imagine what would happen with later versions of the application that need to send more information to BAM!) We modified their design to use one master XSL based on sensorname for all sensors relating to a DO – say, Data Object 'Orders' – and were thus able to reduce the 17 EMS to 1. For those of you wondering how squeaky clean this design is, you are right: it is indeed not squeaky clean, and that brings us to yet another 'inferred' best practice. (I try very hard not to state the obvious in my blogs, with the hope that every time I blog it is very useful, but this one is an exception.)

    Transformations and Calculations: It is optimal to do transformations within an engine like BPEL. Not only does this provide modelling ease with a nice GUI XSL mapper in JDeveloper, the XSL engine in BPEL is quite efficient at runtime as well. And so, doing XSL transformations in BAM is not quite prudent. The same is true for any non-trivial calculations: it is best to do all transformations and calculations and to sanitize the data in a BPEL-like layer, and then send the result to BAM (via JMS, WS, etc.). This delegates to the Oracle BAM reporting tool simply the function of report rendering and the mechanics of real-time reporting, which is what it is best suited to do.

    All nulls are not created equal: Here is yet another possibly known fact, reiterated here. For an EMS with an Upsert operation: a) if empty tags or tags with no value are sent, like <Tag1/> or <Tag1></Tag1>, the DO field will be overwritten with --null--; b) if empty tags are suppressed, i.e. not generated at all, the corresponding DO field will NOT be overwritten; the field will keep whatever value existed previously. For an EMS with an Insert operation, both tags with an empty value and missing tags result in --null-- being written to the DO.

    Hope this helps. Happy 4th!

    Read the article

  • What is causing my spacebar to randomly stop working?

    - by Chris Billington
    A couple of times a day, I'll be typing something and realise I can't type spaces. Usually the cursor will flicker instead when I press the spacebar, and I can type all other letters as far as I can tell. If I'm in a terminal, the cursor turns from a solid square to an empty square until I release the spacebar. For some reason, restarting compiz with alt-F2 compiz fixes it, until it next occurs. I can still copy and paste spaces from sources that already have them, and I can still insert spaces with ctrl-shift-u, 20, enter. This has been happening for a while, since before I upgraded to Maverick, but it feels like it's becoming more frequent. There really doesn't seem to be any kind of a pattern to it. I'm using 64-bit Ubuntu 10.10 on a System76 panp7 laptop. Any ideas how I might troubleshoot?

    EDIT: Using xev, normally a spacebar press registers as:

    KeyPress event, serial 36, synthetic NO, window 0x5600001, root 0x101, subw 0x0, time 26488647, (88,403), root:(748,458), state 0x10, keycode 65 (keysym 0x20, space), same_screen YES, XLookupString gives 1 bytes: (20) " ", XmbLookupString gives 1 bytes: (20) " ", XFilterEvent returns: False
    KeyRelease event, serial 36, synthetic NO, window 0x5600001, root 0x101, subw 0x0, time 26488729, (88,403), root:(748,458), state 0x10, keycode 65 (keysym 0x20, space), same_screen YES, XLookupString gives 1 bytes: (20) " ", XFilterEvent returns: False

    But when it's misbehaving, a press of the spacebar instead gives these three events:

    FocusOut event, serial 36, synthetic NO, window 0x5600001, mode NotifyGrab, detail NotifyAncestor
    FocusIn event, serial 36, synthetic NO, window 0x5600001, mode NotifyUngrab, detail NotifyAncestor
    KeymapNotify event, serial 36, synthetic NO, window 0x0, keys: 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

    FURTHER EDIT: Ok, so I think I've solved the problem, and by that I mean I now know which package to file a bug against. I have a hot corner which initiates a window picker, and I've customised the window picking so that left click goes to a window, right click closes one, and spacebar zooms in on one. When I go to this hot corner, compiz must take control of my spacebar, and it clearly isn't giving it back when I leave the window picker. So I'll be filing a bug against compiz.

    Read the article

  • Oracle and Eloqua Welcome Compendium’s Content Marketing

    - by Mike Stiles
    Yesterday, Oracle announced its acquisition of Compendium, a cloud-based content marketing provider that helps companies plan, produce and deliver engaging content across multiple channels throughout their customers' lifecycle. Why? Because every part of the above paragraph speaks to where modern marketing is and where it’s headed. Customers have now been empowered, thanks to the Internet and particularly social, with access to almost limitless amounts of information about companies and products. This includes the especially influential voices of friends and objective acquaintances that have experience with the product or brand. With mobile, this info is available instantly in the palm of their hand. All of this research and influence mind you, is taking place long before a prospect will ever engage with the brand itself or one of its sales reps. So how does a brand effectively insert itself into these conversations and this flow of the customer journey? Now, more than ever, marketers must deliver relevant and engaging content across multiple channels and throughout the entire customer journey to be useful, helpful, and influential. Compendium has a data-driven content marketing platform that lines up relevant content with customer data and personas so brands can accelerate the conversion of prospects. Now think about combining that with the Oracle Eloqua Marketing Cloud, part of Oracle's comprehensive CX solution. Marketers will be able to automate content delivery across channels by aligning persona-based content with customers' digital body language. Better customer engagement, improved sales lead quality, better return on marketing investment, and higher customer loyalty. Now we’re talking. Does data-driven content marketing have an impact? Compendium customer CVENT is a SaaS company specializing in meetings management tech. They wanted to increase leads & ad performance on their blog and dramatically increase their content. They also wanted to manage the creation, workflow, promotion and distribution of that content. With Compendium, CVENT created over 9,000 content elements, and sales-ready leads grew 325%. So Oracle Eloqua helps you target audiences, know buyers, and automate multi-channel marketing campaigns. Compendium lets you plan, publish, manage and measure content across content types and channels. Now kick it up yet another notch with Oracle’s Analytics, Big Data and Social solutions, and you’re using your marketing dollars to reach the right people in the right place at the right time with the right content. And as if that weren’t enough, your customers will love you for it. @mikestiles

    Read the article

  • View and Flip Between Firefox Tabs in 3D

    - by Asian Angel
    Are you tired of the default tab-switching style in Firefox? Then get ready to enjoy a more visually pleasing 3D experience with the FoxTab extension.

    Using FoxTab: As soon as you have the extension installed, you will see a new toolbar button beside the address bar. Before going further you may want to look through the viewing styles available in the lower right corner. Note: you can choose to have the FoxTab button appear in the status bar if preferred, or press F12 to launch FoxTab from the keyboard. There is a grid view with an angled 3D setting and a page flow view with a more frontal look. If the default background color is not to your liking, you can easily change to a new color or insert a background image. After choosing a new background color, making a few adjustments in the options, and opening more tabs, things look very nice using the grid viewing style, followed by the carousel viewing style, and finally the wall viewing style. You can also set up a top sites page using your favorite viewing style. To add a page to the top sites group, right click within the webpage and select Add To Top Sites. Just like that, your new selection is added. Keep in mind that we were not able to move/switch positions in the grid during our tests.

    Options: The extension has plenty of options and settings to help you customize FoxTab to your liking.

    Conclusion: FoxTab adds visually pleasing 3D tab switching to Firefox for anyone who loves eye candy and a touch of fun while browsing.

    Links: Download the FoxTab extension (Mozilla Add-ons); Visit the FoxTab Homepage

    Similar Articles: You Really Want to Completely Disable Tabs in Firefox?; Quick Hits: 11 Firefox Tab How-Tos; Quick Tip: Save Windows and Tabs When Restarting Firefox; Make Firefox Use Multiple Rows of Tabs; Quick Tip: Use Tab Characters in Textarea Boxes in Firefox

    Read the article

  • SQL SERVER – Saturday Fun Puzzle with SQL Server DATETIME2 and CAST

    - by pinaldave
    Note: I have used SQL Server 2012 for this small fun experiment. Here is what we are going to do: we will run the scripts one at a time instead of running them all together, and try to guess the answer at each step. I am confident that many will get it correct, but if you do not, you will learn something new. Let us create the database and a sample table:

    CREATE DATABASE DB2012
    GO
    USE DB2012
    GO
    CREATE TABLE TableDT
        (DT1 VARCHAR(100), DT2 DATETIME2, DT1C AS DT1, DT2C AS DT2);
    INSERT INTO TableDT (DT1, DT2)
    SELECT GETDATE(), GETDATE()
    GO

    There are four columns in the table. The first column, DT1, is a regular VARCHAR, and the second, DT2, is DATETIME2. Both columns are populated with the same data, as I have used the function GETDATE() for both. Now let us run the SELECT statement and get the result from both columns. Before running the query, please guess the answer and write it down on paper or in a notepad.

    Question 1: Guess the resultset
    SELECT DT1, DT2 FROM TableDT
    GO

    Now once again run a SELECT statement on the same table, but this time retrieve the computed columns only. Once again, I suggest you write down the result in your notepad.

    Question 2: Guess the resultset
    SELECT DT1C, DT2C FROM TableDT
    GO

    Now here is the best part: let us use the CAST function over the computed columns. Here I do want you to stop and guess the answer for sure. If you have not done it so far, stop and do it; believe me, you will like it.

    Question 3: Guess the resultset
    SELECT CAST(DT1C AS DATETIME2) CDT1C, CAST(DT2C AS DATETIME2) CDT2C FROM TableDT
    GO

    Now let us inspect all the answers together and see how many of you got it correct. Answer 1: Answer 2: Answer 3: (result screenshots). If you have not tried to run the scripts so far, you can execute all three of the above scripts together and see the results side by side.

    Here is the Saturday Fun question for you: why do we get the same result from both expressions in Question 3, whereas in Question 2 the two expressions give different answers? I will publish the answer with an explanation in a future blog post. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL DateTime, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
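    Without giving the answer away, one place to look is the data type SQL Server actually recorded for each computed column. This helper query uses standard catalog views against the same TableDT:

    SELECT c.name AS ColumnName, t.name AS TypeName, c.is_computed
    FROM sys.columns c
    INNER JOIN sys.types t ON c.user_type_id = t.user_type_id
    WHERE c.[object_id] = OBJECT_ID('TableDT')
    GO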

    Read the article

  • 2 Servers 1 Database - Can I use Redis?

    - by Aust
    Ok, I have a couple of questions here. First let me give you some background information. I'm starting a project where I have a node.js server running my application and my website running on another, normal server. My application will allow multiple users to connect and update the database simultaneously, so Redis seemed like a good fit there because of its speed and atomic operations. For someone to access my application, they have to log in with an account. To get an account, they have to sign up for one through my website. So my website needs a database, but it's not important to have a database like Redis there, because it doesn't need it. Which leads me to my first question:

    1. Can Redis even be used without node.js? It seems like it would be convenient if both of my servers were using the same database to keep track of information. In some cases they will keep track of the same information (such as user information), and in other cases they will be keeping track of separate information. So even if the website wouldn't be taking full advantage of all that Redis has to offer, it seems like it would be more convenient. Assuming Redis could be used in this situation, that leads to my next question:

    2. Since Redis is linked with JavaScript, how would I handle security with respect to my website users? What would stop my website users from opening Firebug or Chrome's inspector and making changes to the database? Maybe I could design my site with a layout like this: apply.php -> update.php -> home.php, where after they submitted their form it would redirect them to the update page, the JavaScript would run, and then it would redirect them to the home page after the database updated. I don't really know; I'm just taking shots in the dark at this point. :)

    Maybe a better alternative would be to have my node.js application access its own Redis database and also have access to another MySQL database that my website also has access to. Or maybe there is another database better suited to this situation than Redis. Anyway, any direction on this matter would be greatly appreciated. :)

    Read the article

  • Is inline SQL still classed as bad practice now that we have Micro ORMs?

    - by Grofit
    This is a bit of an open-ended question, but I wanted some opinions. I grew up in a world where inline SQL scripts were the norm; then we were all made very aware of SQL injection issues and of how fragile the SQL was when doing string manipulation all over the place. Then came the dawn of the ORM, where you explained the query to the ORM and let it generate its own SQL, which in a lot of cases was not optimal but was safe and easy. Another good thing about ORMs and database abstraction layers was that the SQL was generated with its database engine in mind, so I could use Hibernate/NHibernate with MSSQL or MySQL and my code never changed; it was just a configuration detail.

    Now, fast forward to the current day, when micro ORMs seem to be winning over more developers, and I am wondering why we have seemingly taken a U-turn on the whole inline SQL subject. I must admit I do like the idea of no ORM config files and being able to write my query in a more optimal manner, but it feels like I am opening myself back up to the old vulnerabilities, such as SQL injection, and I am also tying myself to one database engine, so if I want my software to support multiple database engines I would need to do some more string hackery, which seems to make code unreadable and more fragile. (Before someone mentions it: I know you can use parameter-based arguments with most micro ORMs, which offers protection from SQL injection in most cases; a sketch of that is shown after this question.)

    So what are people's opinions on this sort of thing? I am using Dapper as my micro ORM in this instance and NHibernate as my regular ORM; however, most in each field are quite similar. What I term inline SQL is SQL strings within source code. There used to be design debates over SQL strings in source code detracting from the fundamental intent of the logic, which is why statically typed LINQ-style queries became so popular: it's still just one language. But with, let's say, C# and SQL on one page, you have two languages intermingled in your raw source code.

    Just to clarify: SQL injection is just one of the known issues with using SQL strings. I have already mentioned that you can stop it from happening with parameter-based queries; however, I highlight other issues with having SQL queries ingrained in your source code, such as the lack of DB vendor abstraction, as well as losing any level of compile-time error capture on string-based queries. These are all issues we managed to sidestep with the dawn of ORMs and their higher-level querying functionality, such as HQL or LINQ (not all of the issues, but most of them). So I am less focused on the individually highlighted issues and more on the bigger picture: is it now becoming more acceptable to have SQL strings directly in your source code again, as most micro ORMs use this mechanism?

    Here is a similar question which has a few different viewpoints, although it is more about inline SQL without the micro ORM context: http://stackoverflow.com/questions/5303746/is-inline-sql-hard-coding
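    To make the parameterization point concrete in plain T-SQL terms, this is roughly what a micro ORM's parameter support does under the covers: the value travels as a typed parameter and is never concatenated into the SQL string. The Users table here is hypothetical:

    DECLARE @name NVARCHAR(60) = N'O''Brien''; DROP TABLE Users;--';
    EXEC sp_executesql
        N'SELECT * FROM Users WHERE LastName = @name',
        N'@name NVARCHAR(60)',
        @name = @name;
    -- The malicious-looking value is compared literally, never executed
    GO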

    Read the article

  • BAM Data Control in multiple ADF Faces Components

    - by [email protected]
    As we know, Oracle BAM data control instance sharing is not supported. When two or more ADF Faces components must display the same data and are bound to the same Oracle BAM data control definition, we have to make sure that we wrap each ADF Faces component in an ADF task flow and set its Data Control Scope to isolated. This blog shows a small sample to demonstrate this. In this sample we will create a Pie and a Bar graph using the same BAM data control, such that both components use the same data control but have isolated scopes. This sample can be downloaded from Sample1.zip.

    Set-up: Create a BAM data control using the Employees DO (sample).

    Steps:
    1. Right click on the View Controller project and select "New -> ADF Task Flow".
    2. Check "Create Bounded Task Flow", give the task flow (TF) a meaningful name (e.g. EmpPieTF.xml), and click "OK".
    3. From the "Component Palette", drag and drop "View" into the task flow diagram. Give the view a meaningful name, double-click it, and click "OK" in the "Create New JSF Page Fragment" dialog.
    4. From "Data Controls", drag and drop "Employees -> Query" into this jsff page as "Graph -> Pie" (Pie: Sales_Number; Slices: Salesperson).
    5. Repeat steps 1 through 4 for another task flow (e.g. EmpBarTF), dropping "Employees -> Query" into its page as "Graph -> Bar" (Bars: Sales_Number; X-axis: Salesperson).
    6. Open the task flow created in step 2. In the Structure pane, right click on "Task Flow Definition - EmpPieTF".
    7. Click "Insert inside Task Flow Definition - EmpPieTF -> ADF Task Flow -> Data Control Scope" and click "OK".
    8. For the "Data Control Scope", in the Property Inspector -> General section, change the data control scope from Shared to Isolated.
    9. Repeat steps 6 through 8 for the second task flow.
    10. Now create a new jspx page, for example Main.jspx. Drag and drop both task flows ("EmpPieTF" and "EmpBarTF") onto it as regions, surrounding them with panel components as needed.
    11. Run the page Main.jspx.

    Now when the page runs, although both components were created using the same data control, the bindings are not shared, and each component gets a separate instance of the data control.

    Read the article

  • Sounds of the Future at Oracle OpenWorld 2013

    - by Alliances & Channels Redaktion
    "The future begins at Oracle OpenWorld", das Motto weckt große Erwartungen! Wie die Zukunft aussehen könnte, davon konnten sich 60.000 Besucherinnen und Besucher aus 145 Ländern vor Ort in San Francisco selbst überzeugen: In sage und schreibe 2.555 Sessions – verteilt über Downtown San Francisco – ging es dort um Zukunftstechnologien und neue Entwicklungen. Wie soll man zusammenfassen, was insgesamt 3.599 Speaker, fast die Hälfte übrigens Kunden und Partner, in vier Tagen an technologischen Visionen entwickelt und präsentiert haben? Nehmen wir ein konkretes Beispiel, das in diversen Sessions immer wieder auftauchte: Das „Internet of Things“, sprich „intelligente“ Alltagsgegenstände, deren eingebaute Minicomputer ohne den Umweg über einen PC miteinander kommunizieren und auf äußere Einflüsse reagieren. Für viele ist das heute noch Neuland, doch die Weiterentwicklung des Internet of Things eröffnet für Oracle, wie auch für die Partner, ein spannendes Arbeitsfeld und natürlich auch einen neuen Markt. Die omnipräsenten Fokus-Themen der viertägigen größten Hauskonferenz von Oracle hießen in diesem Jahr Customer Experience und Human Capital Management. Spannend für Partner waren auch die Strategien und die Roadmap von Oracle sowie die Neuigkeiten aus den Bereichen Engineered Systems, Cloud Computing, Business Analytics, Big Data und Customer Experience. Neue Rekorde stellte die Oracle OpenWorld auch im Netz auf: Mehr als 2,1 Millionen Menschen besuchten diese Veranstaltung online und nutzten dabei über 224 Social-Media Kanäle – fast doppelt so viele wie noch vor einem Jahr. Die gute Nachricht: Die Oracle OpenWorld bleibt online, denn es besteht nach wie vor die Möglichkeit, OnDemand-Videos der Keynote- und Session-Highlights anzusehen: Gehen Sie einfach auf Conference Video Highlights  und wählen Sie aus acht Bereichen entweder eine Zusammenfassung oder die vollständige Keynote beziehungsweise Session. Dort finden Sie auch Videos der eigenen Fach-Konferenzen, die im Umfeld der Oracle OpenWorld stattfanden: die JavaOne, die MySQL Connect und der Oracle PartnerNetwork Exchange. Beim Oracle PartnerNetwork Exchange wurden, ganz auf die Fragen und Bedürfnisse der Oracle Partner zugeschnitten, Themen wie Cloud für Partner, Applications, Engineered Systems und Hardware, Big Data, oder Industry Solutions behandelt, und es gab, ganz wichtig, viel Gelegenheit zu Austausch und Vernetzung. Konkret befassten sich dort beispielsweise Sessions mit Cloudanwendungen im Gesundheitsbereich, mit der Erstellung überzeugender Business Cases für Kundengespräche oder mit Mobile und Social Networking. Die aus Deutschland angereisten über 40 Partner trafen sich beim OPN Exchange zu einem anregenden gemeinsamen Abend mit den anderen Teilnehmern. Dass die Oracle OpenWorld auch noch zum sportlichen Highlight werden würde, kam denkbar unerwartet: Zeitgleich mit der Konferenz wurde nämlich in der Bucht von San Francisco die entscheidende 19. Etappe des Americas Cup ausgetragen. Im traditionsreichen Segelwettbewerb lag Team Oracle USA zunächst mit 1:8 zurück, schaffte es aber dennoch, den Sieg vor dem lange Zeit überlegenen Team Neuseeland zu holen und somit den Titel zu verteidigen. Selbstverständlich fand die Oracle OpenWorld auch ein großes Medienecho. Wir haben eine Auswahl für Sie zusammengestellt: - ChannelPartner- Computerwoche - Heise - Silicon über Big Data - Silicon über 12c

    Read the article

  • SQL SERVER – Difference Between CURRENT_TIMESTAMP and GETDATE() – CURRENT_TIMESTAMP Equivalent in SQL Server

    - by pinaldave
    A common question I often get from Oracle/MySQL professionals is: "What is the equivalent to CURRENT_TIMESTAMP in SQL Server?" A related question I often get from SQL Server professionals is: "What is the difference between CURRENT_TIMESTAMP and GETDATE()?" A very simple question, but it has shown up so frequently that I feel like writing about it. Well, in SQL Server GETDATE() is equivalent to CURRENT_TIMESTAMP, so if you use CURRENT_TIMESTAMP in your SELECT statement, it will work fine and return the same value (a demonstration follows at the end of this excerpt). Now let us go to the next question, the difference between GETDATE() and CURRENT_TIMESTAMP. As a matter of fact, there is no difference between them in SQL Server (Reference Link). CURRENT_TIMESTAMP is an ANSI SQL function, whereas GETDATE() is the T-SQL implementation of the same function. Both of them derive their value from the operating system of the computer on which the SQL Server instance is running.

    The discussion above prompts another question: which one should you use, GETDATE() or CURRENT_TIMESTAMP? This is indeed a tricky and interesting question. I am comfortable using GETDATE(), so that is what I use, but as a matter of fact there is no right or wrong answer. If you want to follow the ancient saying "When in Rome, do as the Romans do", use GETDATE(); otherwise, continue using CURRENT_TIMESTAMP.

    With that said, there is one very important property we all need to keep in mind: if you use CURRENT_TIMESTAMP while creating an object, it is automatically converted to GETDATE() and stored internally. To illustrate, create a table using the following script:

    CREATE TABLE [dbo].[TestTable](
    [Cold2] [datetime] NULL
    ) ON [PRIMARY]
    GO
    ALTER TABLE [dbo].[TestTable] ADD DEFAULT (CURRENT_TIMESTAMP) FOR [Cold2]
    GO

    Now go to SSMS, generate the script for the table, and you will notice the following syntax:

    CREATE TABLE [dbo].[TestTable](
    [Cold2] [datetime] NULL
    ) ON [PRIMARY]
    GO
    ALTER TABLE [dbo].[TestTable] ADD DEFAULT (GETDATE()) FOR [Cold2]
    GO

    Notice that SQL Server has automatically converted CURRENT_TIMESTAMP to GETDATE(). This gives us an idea of how the two behave. Now go ahead and make your choice! Do let me know which one you will use, CURRENT_TIMESTAMP or GETDATE(), in the comments area. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL DateTime, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
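    Since the aggregated text lost the original side-by-side demonstration, here is a minimal sketch of the comparison the post refers to (the column aliases are illustrative):

        SELECT CURRENT_TIMESTAMP AS [Via_CURRENT_TIMESTAMP],
               GETDATE()         AS [Via_GETDATE];
        -- Both columns return the identical datetime value,
        -- confirming the two functions are interchangeable in SQL Server.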

    Read the article

< Previous Page | 721 722 723 724 725 726 727 728 729 730 731 732  | Next Page >