Search Results

Search found 2670 results on 107 pages for 'dave oliver'.


  • Grow Your Oracle Exadata and Manageability Business: Engage With Us to Find Out How

    - by swalker
    Don't miss out on the first EMEA Partner Community Cast! If you are a business decision maker, project leader, technical leader or business development manager you will gain incredible value from these events, and we believe that this introduction to Oracle Partner Communities will bring you a wealth of new opportunities. Join us on December 7th, 10:00 GMT (11:00 CET) for the first broadcast on the Exadata and Manageability solution areas. In just 30 minutes, you will find out more about Oracle's Exadata, Manageability and Oracle Enterprise Manager 12c solutions, and the value they can generate for you and your customers. See the full agenda here. Hosted by Paul Thompson, Senior Director, Alliances and Solutions Partner Programs, Oracle EMEA and Javier Puerta, Director, Core Technology Partner Programs, Oracle EMEA, our special guests include: Steve McNickle, Vice President Europe, cVidya; Dave Sanderson, Associate Partner, Technology Reply; Patrick Rood, Lead for Indirect Manageability Business, Oracle EMEA. Register Now. Partner Community Casts are a new series of interactive broadcasts designed to help you truly engage with Oracle on an individual level, build expertise around your specialist solution area and make valuable new contacts in Oracle and other Oracle partners. Community Casts can be viewed live from our online platform. Audience members have the opportunity to submit questions during the show via chat or social media outlets, many of which are answered on-air. Learn more about EMEA Partner Community Casts. Register Now to learn how participation in the Exadata and Manageability Partner Communities will help your business flourish!

    Read the article

  • Efficient mapping layout in 2D side-scroller, and collisions between character and the world

    - by Jack
    I haven't touched Visual Studio for a couple of months now, but I was playing a game from the '90s today and had an epiphany: I was looking for something that I didn't need, and I wasn't using what I knew correctly. One of those realizations was collision, so let me tell you a bit about the project I was working on. The project's graphics look like Mario or Dangerous Dave, etc., you get the idea - old-school pixels. So anyway, I remember trying to think of something other than AABB for the character's form, but I couldn't think of anything. Perhaps I could get a suggestion for this? Another thing is the world - I don't want it to be just a linear world, I want mountains, etc. My idea is to use triangles, and I have no idea yet what to do if I want just part of the cube, say 3/4 or 2/4 or whatever. Hard-coding such things seems inefficient. P.S. I am not looking at the precision level offered by Box2D. Actually I remember trying to implement it at first, but I failed as my understanding of C++ wasn't advanced enough, as will be mentioned below. P.P.S. I am programming in C++, and I haven't done it for a couple of months now. I have no means of testing it either, as my PC is broken down, and this one can barely run games from the late '90s, not to speak of a compiler or a program with inefficient resource management... I am also not an expert (obviously); I don't even know if I can consider myself an average programmer. In short, I am simply curious about my thoughts and my past experience from programming the game. I may come back to it when my PC is fixed; I'm already making a note about these things.

    Read the article

  • SQL SERVER – Get Schema Name from Object ID using OBJECT_SCHEMA_NAME

    - by pinaldave
    Sometimes a simple solution has an even simpler alternative, but we often do not practice it because we do not see the value in it or find it useful. Well, today's blog post is also about something which I have seen not practiced much in code. We are so comfortable with the alternative usage that we do not feel like switching how we query the data. I was going over forums and I noticed that in one place a user had used the following code to get the Schema Name from an Object ID. USE AdventureWorks2012 GO SELECT s.name AS SchemaName, t.name AS TableName, s.schema_id, t.OBJECT_ID FROM sys.Tables t INNER JOIN sys.schemas s ON s.schema_id = t.schema_id WHERE t.name = OBJECT_NAME(46623209) GO Before I continue let me say I do not see anything wrong with this script. It is just fine and one of the ways to get the SchemaName from an Object ID. However, I have been using the function OBJECT_SCHEMA_NAME to get the schema name. If I had to write the same code from the beginning, I would have written it as follows. SELECT OBJECT_SCHEMA_NAME(46623209) AS SchemaName, t.name AS TableName, t.schema_id, t.OBJECT_ID FROM sys.tables t WHERE t.name = OBJECT_NAME(46623209) GO Now, both of the above queries give you the exact same result. If you remove the WHERE condition, they will give you information for all the tables in the database. Now the question is which one is better – honestly – it is not about one being better than the other. Use the one which you prefer. I prefer the second one as it requires less typing. Let me ask the same question to you – which method to get the schema name do you use? And why? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL System Table, SQL Tips and Tricks, T SQL, Technology
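
    As a quick illustration of the same idea (a minimal sketch, not from the original post; the object id 46623209 only means something in the database the post was written against), OBJECT_SCHEMA_NAME pairs naturally with OBJECT_NAME to build a two-part name without touching the catalog views at all:

      -- Sketch: build schema.table directly from an object id
      SELECT QUOTENAME(OBJECT_SCHEMA_NAME(46623209)) + N'.' +
             QUOTENAME(OBJECT_NAME(46623209)) AS TwoPartName;
      GO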

    Read the article

  • Podcast Show Notes: Conversations in the Cloud

    - by Bob Rhubart
    The centerpiece of every OTN Architect Day event is a panel discussion that gathers all of the session speakers together to respond to questions from the audience. I generally try to record these discussions, usually by sticking my iPad on top of one of the PA speakers, with mixed results. Fortunately, the A/V tech at the venue for the Los Angeles event, held on October 25, 2012, had the necessary gear to get a good-quality recording of the panel discussion. So starting this week the OTN ArchBeat Podcast will feature a short series of highlights from those discussions. Listen to Part 1: Dude, What's My Role? Members of the Architect Day panel respond to an audience question about what happens to traditional IT roles in a cloud environment. Listen to Part 2: Migrating Mission-Critical Applications to the Cloud (Nov 21) The panel offers advice and examples in response to an audience question about dealing with mission-critical applications. Listen to Part 3: All Clouds Are Not Equal (Nov 28) The panel responds to a challenging question about cloud strategy with a discussion of enterprise-grade cloud services. Listen to Part 4: Cloud Security and Auditing (Dec 5) The last segment in the series is a short discussion in response to an audience question about auditing and security in the cloud. The Panelists (Listed alphabetically) Ashok Aletty, Senior Director of Product Management, Oracle Cloud Application Foundation Dr. James Baty, Vice President, Oracle Global Enterprise Architecture Program Dave Chappelle, Enterprise Architect, Oracle Global Enterprise Architecture Program Jeff Davies, Senior Principal Product Manager, Oracle Corporation Anbu Krishnaswamy, Enterprise Architect, Oracle Global Enterprise Architecture Program Dhanraj Pondicherry, Sales Consulting Manager, Oracle Exadata Perren Walker, Senior Principal Product Manager, Oracle Enterprise Manager Coming Soon Upcoming programs will focus on DevOps and Continuous Integration, and on Oracle's Java Cloud and Developer Cloud services. Stay tuned: RSS

    Read the article

  • SQL SERVER – Excel Losing Decimal Values When Value Pasted from SSMS ResultSet

    - by pinaldave
    No! It is not a SQL Server issue or an SSMS issue. It is how things work, and there is a simple trick to resolve it. It is very common that when users are copying a resultset to Excel, the floating point or decimal values are lost. The solution is very simple and it requires a small adjustment in Excel. By default Excel is very smart, and when it detects that the value being pasted is numeric, it changes the column format to accommodate that. Since trailing zeros after the decimal point have no numeric value, Excel automatically hides them. To prevent this from happening, the user has to convert the columns to text format so Excel can preserve the formatting. Here is how you can do it. Select the corner between A and 1 and right-click on it. It will select the complete spreadsheet. If you want to change the format of a single column, you can select an individual column the same way. In the menu, click on Format Cells… It will bring up the following menu. Here, by default, the selected columns will be General; change that to Text. It will change the format of all the cells to Text. Now once again paste the values from SSMS into Excel. This time it will preserve the decimal values from SSMS. Solved! Do you know any other trick to preserve the decimal values? Leave a comment please. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology Tagged: Excel

    Read the article

  • Best of OTN - Week of May 25th

    - by CassandraClark-OTN
    Architect Community Podcast: Going Mobile - Developing Enterprise Mobile Apps This four-part OTN ArchBeat Podcast series is devoted to a discussion about bringing mobility to the enterprise, and how architects and developers can take advantage of the opportunities in the evolution of mobile application development. Video: Data Modeling and Moving Meditation with Kent Graziano Want to learn more about Kent's Kscope 2014 data modeling sessions and how Chi Gung can help you get a great start on your day? Check out this video interview. Video: Oracle ACE Director Stewart Bryson on OBIEE, ODI, GoldenGate In this interview Stewart talks about how OBIEE, ODI, GoldenGate and other technologies fit into his Kscope 2014 sessions, and about the sessions he plans to attend. Friday Funny from OTN Architect Community Manager Bob Rhubart: Even if you're not a person of a certain age, you need to read A journey into my colon -- and yours, humorist Dave Barry's wildly funny 2008 account of his colonoscopy. Because one day you will be a person of a certain age... Get involved in community conversations on the following OTN channels... OTN TechBlog The Java Source Blog The OTN Garage Blog The OTN ArchBeat Blog @oracleotn @java @OTN_Garage @OTNArchBeat @OracleDBDev OTN I Love Java OTN Garage OTN ArchBeat Oracle DB Dev OTN Java OTN ArchBeat

    Read the article

  • Architectural and Design Challenges with SOA

    With all of the hype about service oriented architecture (SOA), primarily through the use of web services, not much has been said about the potential issues of using SOA in the design of an application. I am personally a fan of SOA, but it is not the solution for every application. Proper evaluation should be done on all requirements and use cases prior to deciding to go down the SOA road. It is important to consider how your application/service will handle the following perils as it executes. Example challenges of SOA: network connectivity issues, handling connectivity issues, and longer processing/transaction times. How many of us have had issues visiting our favorite web sites from time to time? The same issue will occur when using a service-based architecture, especially if it is implemented using web services. Forcing applications to access services via a network connection introduces a lot of new failure points to the application. Potential failure points include: DNS issues, network hardware issues, remote server issues, and the lack of physical network connections. When network connectivity issues do occur, how the service clients are implemented is very important. Should the client wait and poll the service until it is accessible again? If so, what is the maximum wait time or number of attempts it should retry? The fact that services are distributed across a network automatically reduces the responsiveness of client applications, because processing time must now also include the time to send and receive messages from the called services. This could add anywhere from nanoseconds to minutes per request based on network load and server usage of the service provider. If speed is a highly desirable quality attribute, then I would consider creating components that are hosted where the client application is located. References: Rader, Dave. (2002). Overcoming Web Services Challenges with Smart Design: http://soa.sys-con.com/node/39458

    Read the article

  • SQLAuthority News – Download Whitepaper – Power View Infrastructure Configuration and Installation: Step-by-Step and Scripts

    - by pinaldave
    Power View, a feature of the SQL Server 2012 Reporting Services Add-in for Microsoft SharePoint Server 2010 Enterprise Edition, is an interactive data exploration, visualization, and presentation experience. It provides intuitive ad-hoc reporting for business users such as data analysts, business decision makers, and information workers. Microsoft has recently released a very interesting whitepaper which covers a sample scenario that validates the connectivity of Power View reports to both PowerPivot workbooks and tabular models. This white paper talks about the following important concepts about Power View: Understanding the hardware and software requirements and their download locations; Installing and configuring the required infrastructure when Power View and its data models are on the same computer and on different computers; Installing and configuring a computer used for client access to Power View reports, models, SharePoint 2012 and Power View in a workgroup; Configuring single sign-on access for double-hop scenarios with and without Kerberos. You can download the whitepaper from here. This whitepaper talks about many interesting scenarios. It would be really interesting to know if you are using Power View in your production environment. If yes, would you please share your experience here? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Business Intelligence, Data Warehousing, PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, T SQL, Technology

    Read the article

  • PPF Savings Interest Rate Increased To 8.6% From 8% [India]

    - by Gopinath
    Here is some good news for small Indian investors who save money in Public Provident Fund (PPF) accounts operated in Post Offices and a few nationalized banks – the returns on your PPF savings are going to increase. The Indian government has decided to increase the rate of interest paid to customers from 8% to 8.6%. To put it in numbers, if you have 2,00,000 of savings in a PPF account, it is going to give returns of 17,200/- per annum compared to returns of 16,000/- at 8%. Also, the maximum cap on investments into a PPF account per annum is increased to 1,00,000 from 70,000/-. PPF is one of the safest debt investments that gives very decent returns, but if you are a salaried employee with a PF account then consider investing in Voluntary PF (VPF) instead of PPF, as VPF returns are higher than PPF. CC image credit: Dave Dugdale. This article titled, PPF Savings Interest Rate Increased To 8.6% From 8% [India], was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.

    Read the article

  • SQL SERVER – Using RAND() in User Defined Functions (UDF)

    - by pinaldave
    Here is the question I received in an email. “Pinal, I am writing a function where we need to generate a random password. While writing the T-SQL I faced the following issue. Every time I tried to use the RAND() function in my User Defined Function I got the following error: Msg 443, Level 16, State 1, Procedure RandFn, Line 7 Invalid use of a side-effecting operator ‘rand’ within a function. Here is the simplified T-SQL code of the function which I am using: CREATE FUNCTION RandFn() RETURNS INT AS BEGIN DECLARE @rndValue INT SET @rndValue = RAND() RETURN @rndValue END GO I must use a UDF, so is there any workaround to use the RAND function in a UDF?” Here is the workaround for how RAND() can be used in a UDF. The scope of this blog post is not to discuss the advantages or disadvantages of the function or of random functions, but just to show how the RAND() function can be used in a UDF. The RAND() function is not allowed directly in a UDF, so we have to find an alternate way to use it. This can be achieved by creating a VIEW which uses the RAND() function, and then using that VIEW in the UDF. Here are the step-by-step instructions. Create a VIEW using the RAND function. CREATE VIEW rndView AS SELECT RAND() rndResult GO Create a UDF using the same VIEW. CREATE FUNCTION RandFn() RETURNS DECIMAL(18,18) AS BEGIN DECLARE @rndValue DECIMAL(18,18) SELECT @rndValue = rndResult FROM rndView RETURN @rndValue END GO Now execute the UDF and it will work just fine and return a random result. SELECT dbo.RandFn() GO In the T-SQL world, I have noticed that there is more than one solution to every problem. Is there any better solution to this question? Please post it as a comment and I will include it with due credit. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: technology
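
    One alternative sometimes used (a sketch of my own, not from the post – the function name, parameter, and modulus below are illustrative assumptions): keep the side-effecting call outside the UDF by passing a NEWID() value in as a parameter, then derive the number deterministically from that seed inside the function.

      -- Sketch (assumed names): the caller supplies the randomness via NEWID(),
      -- so nothing side-effecting runs inside the function itself.
      CREATE FUNCTION dbo.RandIntFn (@seed UNIQUEIDENTIFIER, @maxValue INT)
      RETURNS INT
      AS
      BEGIN
          -- CHECKSUM over the supplied seed is deterministic, hence legal in a UDF
          RETURN ABS(CHECKSUM(@seed) % @maxValue)
      END
      GO
      -- Usage: a pseudo-random integer between 0 and 99 on each call
      SELECT dbo.RandIntFn(NEWID(), 100) AS RandomValue
      GO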

    Read the article

  • SQL SERVER – Caption the Cartoon Contest – Last 2 Days

    - by pinaldave
    A developer's life is very interesting; we often want to start our day early at the job so we can go home early. However, that day never comes, as the life of a developer is always about working late hours. If the developer goes to the office early – there is a good chance that his co-workers will come late. Additionally, I am confident that there will always be something urgent for developers or DBAs to solve right at the time they are ready to go home. This is the life of developers!  Here is the interesting story of a DBA who was about to go home. He had to take his girlfriend to a movie and dinner in 30 minutes. However, his manager asked him to fix the performance-related issues with their production server. In a normal case, he would have had only two choices: a) Job or b) Girlfriend. Well, our superhero DBA decided to use efficient tools and improve the performance of the production server in merely 30 minutes. When he was done, his manager was absolutely surprised by the efficiency and accuracy of his work. He asked him the following question - Here is the contest – you need to guess what the answer of our superhero DBA was. If you guess the answer correctly you may win a Star Wars R2-D2 Inflatable Remote Controlled device. Additionally, if you Download DB Optimizer before Dec 8, 2012 – you will be eligible for a USD 25 Amazon Gift Card (there are a total of 10 such awards). Please do not leave comments in this thread – to participate in the contest, please leave a comment on the original contest page. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Fix Visual Studio Error : Connections to SQL Server files (.mdf) require SQL Server Express 2005 to function properly. Please verify the installation of the component or download from the URL

    - by pinaldave
    In one of my virtual environments, while I was trying to add a SQL Server database (.mdf) file to an ASP.NET project, I encountered the following error: Connections to SQL Server files (.mdf) require SQL Server Express 2005 to function properly. Please verify the installation of the component or download from the URL:  I have been using SQL Server 2012 for a long time, so this error was a bit interesting to me. I realized that there should not be any need for a SQL Server 2005 installation. I quickly figured out that I could remove this error if I did as mentioned below: Open Microsoft Visual Studio. Select Tools >> Options >> Database Tools >> Data Connections. Enter the name of an installed instance in the “SQL Server Instance Name” field. Click OK. If you do not know the instance name, you can follow either of these options: 1) Use the command line sqlcmd utility 2) Use SQL Server Management Studio. Is there any other way to resolve this error? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: sqlcmd, Visual Studio
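
    As a small illustration of those two options (a sketch of my own, not from the post): sqlcmd can enumerate the instances it can see, and once connected from SSMS you can ask the server for its own instance name.

      -- From a command prompt, list the SQL Server instances visible on the network:
      --   sqlcmd -L
      -- From an SSMS query window, ask the connected server directly:
      SELECT @@SERVERNAME                   AS FullServerName,
             SERVERPROPERTY('InstanceName') AS InstanceName;  -- NULL means the default instance
      GO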

    Read the article

  • Mouse doesn't work & internet connection not made in Ubuntu 12.04 LTS

    - by David Skare
    Yesterday, Nov 15, 2012, I booted into my Ubuntu 12.04 LTS system. It has resided on a Crucial 128 GB SSD with about 90% free space since early summer. I also have Windows 7 loaded on another Crucial 256 GB SSD. Ubuntu has set up a dual boot system for me even though each OS has its own SSD. I have been using this setup without problems since summer. Yesterday, when the boot process finished, my Microsoft Comfort Mouse 3000 did not work and there was a message that Ubuntu was not connected to the internet. So w/o the mouse I was forced to turn the machine off manually. About 4 days ago Ubuntu worked fine and booting into Win 7 also works fine. I have a backup machine with the same style mouse on it so I swapped the mouse onto this system. Same results. But both mice work when booting into Win 7. Today I removed both SSDs and installed my old Ubuntu 12.04 HD, which has not been used since I moved Ubuntu from it to the SSD. Same results. Between the last time I used Ubuntu 12.04 on the SSD and when I tried to use it again, I made no changes to my machine, either hardware or software. My machine's specs are: AMD FX-6100, MSI 990FXA-GD65 AM3+ format with latest BIOS (Ver 19.9), Corsair Vengeance 1866 MHz memory - 16 GB (4GB X 4 sticks), MSI N580GTX video card (nVidia 306.97 drivers), Sony Bravia 32" HD TV as a monitor, Pioneer BluRay DVD-RW, DSL connection to internet thru a router (10 Mbps), Crucial 128 GB SSD (90% free space), Microsoft Comfort Mouse 3000. I try to maintain current BIOS and drivers for all devices. I mostly use my Ubuntu system for programming in GCC and OpenCOBOL, surfing the internet and e-mailing. No games are installed. I'm stumped! If anyone has experienced this same problem I'd appreciate knowing how you solved it. TIA, Dave

    Read the article

  • Java Management Extensions with Oracle WebLogic Server 12c–Webcast November 13th 2012

    - by JuergenKress
    Date: Tuesday, November 13, 2012 Time: 10:00 AM PST You’re responsible for evaluating technologies to monitor and configure Oracle WebLogic Server. This Webcast will help you get a complete picture of what Oracle WebLogic Server 12c with Java Management Extensions (JMX) can do for you. Dr. Frank Munz will explain the development of JMX with Spring and compare it to Java EE. A new feature of Oracle WebLogic Server 12c, the RESTful Management API, will also be examined. Learn how JMX in Oracle WebLogic Server 12c is: Highly efficient. It uses WebLogic Scripting Tool (WLST) instead of a client JMX program written in Java, resulting in little overhead. Effective. It bundles optimized tools such as WLST and WebLogic Diagnostic Framework to eliminate the requirement for Java programming on the client side. Compliant. It is fully standard-compliant but also works with open source clients and frameworks. Register for the Webcast today. Speakers: Dr. Frank Munz, Oracle Technologist of the Year; Dave Cabelus, Senior Principal Product Manager, Oracle WebLogic. WebLogic Partner Community: For regular information, become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: Java, Frank Munz, WebLogic Community, Oracle, OPN, Jürgen Kress

    Read the article

  • ArchBeat Link-o-Rama for December 4, 2012

    - by Bob Rhubart
    Exalogic 2.0.1 Tea Break Snippets - Creating and using Distribution Groups | The Old Toxophilist "Although in many cases we, as Cloud Users, may not be too worried how the Virtualisation Algorithm decides where to place our vServers," says The Old Toxophilist, "there are cases where it is extremely important that vServers run on distinct physical compute nodes." There's plenty more on the subject in his blog post. Oracle Endeca (2.3) Record Level Security | Adam Seed Adam Seed's blog post covers "the basics of security within Endeca Information Discovery, as these basic security objects are required in order to explain the implementation of record level security." ODI Handling DQ | Gurcan Orhan Oracle ACE Director Gurcan Orhan suggests you have fun with these scripts for Oracle Data Integrator. Parleys Testimonial at GlassFish Community Event - JavaOne 2012 Video of Parleys webmaster Stephan Janssen's presentation at the GlassFish Community Event at JavaOne 2012, in which he explains why Parleys moved from Tomcat to GlassFish. Java Spotlight Episode 109: Pete Muir on CDI 1.1 This edition of Roger Brinkley's Java Spotlight Podcast features an interview with CDI 1.1 spec lead Pete Muir of JBoss/Red Hat. Muir talks about the features in CDI 1.1 and what to expect in the future. Webcast: Java Management Extensions with Oracle WebLogic Server 12c Dr. Frank Munz and Dave Cabelus do the talking in this on-demand webcast focused on Oracle WebLogic Server 12c with Java Management Extensions (JMX). Using the Coherence API to get Portable Object Format bytes | Bruno Borges Bruno Borges shares a code snippet that illustrates how easy it is to use the Coherence API. Thought for the Day "Experience is something you don't get until just after you need it." — Anonymous Source: SoftwareQuotes.com

    Read the article

  • Pidgin XML

    - by David Totzke
    I'm looking at an xml document that gets passed to a COM object (yes, I said the "C" word) to save a new record.  You can tell by the "new|" at the top of the file before the xml declaration.  If we were saving an edit to an existing record, there would be "edit|" at the top.  Couldn't you just have a root element with something like: <myRootElement mode="new"> Ah, here's why that won't work... There's no single root element, but that's ok because next we find that this document is actually several documents.  <?xml version="1.0"?> appears several times.  The final document opens with <myElementStart> and closes with <myElementEnd>, so it's not even well-formed. This isn't a style thing.  This is broken.  I mean, basic well-formed XML only has two rules; three if you count the xml declaration, but it works as a document for DTO purposes without it. One root element. Close all elements with a matching tag. As a result, both ends of this conversation need to speak the same dialect of broken XML in order to communicate.  To join the conversation, you must also learn pidgin XML. How can you start out so right - XML being the obvious choice in this instance - and then go so horribly wrong? Dave. Just because I can…

    Read the article

  • SQL SERVER – FIX ERROR – Cannot connect to . Login failed. The login is from an untrusted domain and cannot be used with Windows authentication. (Microsoft SQL Server, Error: 18452)

    - by pinaldave
    Just a day ago, I was attempting to connect to my local SQL Server using the IP 127.0.0.1. The IP is of my local machine and SQL Server is installed on the local box as well. However, whenever I tried to connect to the server it gave me the following strange error. Cannot connect to 127.0.0.1. Login failed. The login is from an untrusted domain and cannot be used with Windows authentication. (Microsoft SQL Server, Error: 18452) The reason was indeed strange, as I was trying to connect from the local box to the local box and it said my login was from an untrusted domain. As my system is not part of any domain, this was really confusing to me. Another thing was that I had always been able to connect to SQL Server using 127.0.0.1 before, so this was a bit strange to me. I started to think about what I had changed since the last time I connected to SQL Server. Suddenly I remembered that I had modified my computer’s hosts file for some other purpose. Solution: I opened my hosts file and immediately added an entry like 127.0.0.1 localhost. Once I added it I was able to reconnect to SQL Server as usual. The location of the hosts file is C:\Windows\System32\drivers\etc. You will find a file with the name hosts in it; make sure to open it with Notepad. If you are part of a domain and your organization is using Active Directory, make sure that your account is added properly to Active Directory and also has the proper security permissions to execute the task. Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
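
    If you want to confirm what kind of connection and authentication you actually ended up with after a fix like this, one quick check (a sketch of my own, not from the post) is the connections DMV:

      -- Sketch: inspect the current connection's transport and Windows auth scheme
      SELECT net_transport,   -- e.g. Shared memory or TCP
             auth_scheme      -- e.g. NTLM or KERBEROS for Windows authentication
      FROM sys.dm_exec_connections
      WHERE session_id = @@SPID;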

    Read the article

  • Installing Ubuntu on Asus G75VW (UEFI)

    - by user101653
    You all are my last hope... help! I bought an Asus G75VW from Best Buy. It has the new UEFI BIOS instead of the old-style BIOS (1980's) and has Windows 8 preinstalled. I cannot get the G75VW to install Ubuntu 12.10 in EFI mode. I did get Ubuntu to load if I changed the BIOS to CSM, and the computer sees and installs Ubuntu in "legacy mode". I attempted boot repair, and Ubuntu will load after 1 minute, but as legacy BIOS only. If I change the BIOS to UEFI, "Binary is whitelisted" is displayed and I get a purple screen. My goal... keep my preinstalled Windows 8 on internal drive bay 1 and install Ubuntu 12.10 on internal drive bay 2... and somehow get a choice of which one to boot. I am at a loss. I am a software programmer, but I am very bad at understanding BIOS and partitioning. Any ideas? Has anyone done what I want to do? This is a full second day on my "issue"! If I cannot get Ubuntu installed, I'm returning the laptop and will "wait" until these obstacles of UEFI/EFI are properly handled to allow people to load EFI-based Ubuntu without a hitch. Thanks, Dave

    Read the article

  • Microsoft Office 2003 document (Excel and Word) intermittently takes 30 seconds to load

    - by Julio Nobre
    I am trying to figure out why a simple .XLS Excel workbook is taking, randomly, 30 seconds to open. Before answering, please bear in mind the following. Problem symptoms: The hang is intermittent and it takes exactly 30 seconds; During the hang there is no CPU or disk activity; It only happens during document load. Everything runs smoothly after that; Windows Explorer.exe hangs on the folder, but all other folders, the system and applications are still responsive; There are no consecutive hangs. I have to wait for a while to reproduce this behaviour; All sample documents are located on a local drive (C:\BPI); No document has macros or uses any add-ins; The problem does occur with other file extensions, like .PDF, for example; Office 2003 has been in use for several years; The computer is running Windows XP; The computer has several network mapped drives, all addressed to the main file server; Recently, the main fileserver was replaced by Windows SBS 2011 Standard Edition. What I have done so far: I have traced the machine's Explorer.exe using Process Monitor, added the Duration column, and filtered by Duration > 1. That is how I found that the hang was taking exactly 30 seconds. For further information, please refer to Oliver Salzburg's tutorial. Using Process Monitor, I have also figured out that five operations were taking most of the sample's collected duration. Looking at the sample image below, in the Operation column you will notice that one single operation was taking 29 seconds; I have tried different documents (.xls and .doc), all of them smaller than 30 KB; I have, temporarily, removed all shortcuts in the user's Documents folder that were pointing to network drives or shares; I have run CCleaner to fix registry issues; I made sure that there were no external links in the tested workbook or Word documents; I have reproduced this behaviour for hours; I have extensively researched for hours on the web; Process Monitor's collected and filtered data

    Read the article

  • Understanding the 'High Performance' meaning in Extreme Transaction Processing

    - by kyap
    Despite my previous blog entries on SOA/BPM and Identity Management, the domain I am most passionate about is definitely Extreme Transaction Processing, commonly called XTP. I came across XTP back in 2007 while I was still an FMW Product Manager in EMEA. At that time Oracle acquired a company called Tangosol, which owned a unique product called Coherence that we renamed to Oracle Coherence. Beside this innovative renaming of the product, to be honest, I didn't know much about it, except that it was a "distributed in-memory cache for Extreme Transaction Processing"... not very helpful still. In general, when people don't fully understand a technology or a concept, they tend to find shortcuts, either correct or not, to justify their lack of understanding... and of course I was part of this category of individuals. And the shortcut was "Oracle Coherence Cache helps to improve Performance". Excellent marketing slogan... but still not very meaningful. By chance I was able to get away quickly from that group in July 2007* at Thames Valley Park (UK), after I attended one of the most interesting workshops of my 10-year career in Oracle, delivered by Brian Oliver. The biggest mistake I made was to assume that performance improvement with Coherence was related to the response time. Which could be considered legitimate at that time, because after all, caches help to reduce latency on cached data access and hence reduce the response time. But like all caches, you need to define caching and expiration policies, think about the cache-miss strategy, and most of the time you have to partially re-write your application in order to work with the cache. As a result, the expected benefit vanishes... so, not very useful then? The key mistake I made was my perception of, or obsession with, how performance improvement should be driven, but I strongly believe this is still a common problem for most developers. In fact we all know that the performance of a system is generally presented by the Capacity (or Throughput), with the 2 important dimensions Speed (response-time) and Volume (load): Capacity (TPS) = Volume (T) / Speed (S). To increase the Capacity, we can either reduce the Speed (in terms of response-time), or increase the Volume. However, we tend to focus only on reducing the Speed dimension, perhaps because it is more concrete and tangible to measure, and nicer to present to our management because there is a direct impact on the end-user experience. On the other hand, we assume the Volume can be addressed by the underlying hardware or software stack, so if we need more capacity (scale out), we just add more hardware or software. Unfortunately, reality proves that IT is never as ideal as we assume... The challenge with the Speed-improvement approach is that it is generally difficult and costly to make things that are already fast... faster. And adding Coherence will not necessarily help either. Even if we manage to do so, the Capacity cannot increase forever because... the Speed can be influenced by the Volume. For all systems, we always observe the same performance behaviour: in all traditional systems, the increase of Volume (Transactions) will also increase the Speed (Response-Time) at some point. The reason is simple: most of the time the application logic was not designed to scale. As an example, if you have a while-loop in your application, it is natural to expect that parsing 200 entries will require double the execution time compared to 100 entries.
    If you need to "speed up" the execution, you can only upgrade your hardware (scale up) with a faster CPU and/or network to reduce network latency. That is technically limited and economically inefficient. And this is exactly where XTP and Coherence kick in. The primary objective of XTP is to design applications which can scale out to increase the Volume, by applying coding techniques that keep the execution time as constant as possible, independent of the amount of runtime data being manipulated. It is actually not just about having an application that runs as fast as possible, but about having a much more predictable system, with constant response time and linear scaling, so we can easily increase throughput by adding more hardware in parallel. It is in general combined with the Low Latency Programming model, where we try to optimize network usage as much as possible, either from the programmatic angle (fewer network hops to complete a task) and/or from a hardware angle (faster network equipment). In this picture, Oracle Coherence can be considered a software-level XTP enabler, via the Distributed Cache, because it can guarantee: - Constant data object access time, independent of the number of objects and the Coherence cluster size - Data object distribution by affinity for in-memory data grouping - In-place data processing for parallel execution. To summarize, Oracle Coherence is indeed useful to improve your application performance, just not in the way we commonly think. It's not about the Speed itself, but about the overall Capacity under extreme load while keeping a consistent Speed. In the future I will keep adding new blog entries around this topic, with some sample code and experience sharing that I have captured over the last few years. In the meanwhile, if you want to know more about Oracle Coherence, I strongly suggest you start by checking how our worldwide customers are using Oracle Coherence, then start playing with the product through our tutorial. Have fun!

    Read the article

  • WPF Binding to DataRow Columns

    - by Trindaz
    Hi, I've taken some sample code from http://sweux.com/blogs/smoura/index.php/wpf/2009/06/15/wpf-toolkit-datagrid-part-iv-templatecolumns-and-row-grouping/ that provides grouping of data in a WPF DataGrid. I'm modifying the example to use a DataTable instead of a Collection of entities. My problem is in translating a binding declaration {Binding Parent.IsExpanded}, which works fine where Parent is a reference to an entity that has the IsExpanded attribute, to something that will work for my weakly typed DataTable, where Parent is the name of a column and references another DataRow in the same DataTable. I've tried declarations like {Binding Parent.Items[IsExpanded]} and {Binding Parent("IsExpanded")} but none of these seem to work. How can I create a binding to the IsExpanded column of the DataRow Parent in my DataTable? Thanks in advance, Dave

    Read the article

  • Getting real-time market/stock quotes in C#/Java

    - by David Menard
    Hey, I would like to make a program that acts like a big filter for stocks. To do so, I need to have real-time (or delayed) quotes from the market. I started getting stock quotes by requesting pages from Yahoo according to the ticker and parsing the HTML. I was wondering how best to do this requesting and parsing. Is there some way I can request only the stock quotes and their info? I know some applications do this, and I am very curious how they do it, because requesting web pages and parsing them is very time-consuming. Thanks, Dave

    Read the article

  • Differences & Similarities Between Programming Paradigms

    - by DaveDev
    Hi guys, I've been working as a developer for the past 4 years, with the 4 years previous to that spent studying software development in college. In my 4 years in the industry I've done some work in VB6 (which was a joke), but most of it has been in C#/ASP.NET. During this time, I've moved from an "object-aware" procedural paradigm to an object-oriented paradigm. Lately I've been curious about the other programming paradigms out there, so I thought I'd ask other developers their opinions on the similarities and differences between these paradigms, specifically compared to OOP. In OOP, I find that there's a strong focus on the relationships and logical interactions between concepts. What are the frames of mind you have to be in for the other paradigms? Thanks, Dave

    Read the article

  • How Do You Insert Large Blobs Into Oracle 10G Using System.Data.OracleClient?

    - by discwiz
    Trying to insert 315K GIF files into an Oracle 10g database. Every time I get this error "ora-01460: unimplemented or unreasonable conversion requested" when I run the stored procedure. It appears that there is a 32K limit if I use a stored procedure. I read online that this does not apply if you are doing a direct insert, but I do not know how to create the insert string for a byte array. This is a thick client running on the server, so I am not worried about SQL injection attacks. Any help would be greatly appreciated. FYI, the code is in VB.NET. Thanks, Dave

    Read the article

  • SQL Server: SELECT rows with MAX(Column A), MAX(Column B), DISTINCT by related columns

    - by z531
    Scenario:
    Table A (MasterID, Added Date, Added By, Updated By, Updated Date)
    1, 1/1/2010, 'Fred', null, null
    2, 1/2/2010, 'Barney', 'Mr. Slate', 1/7/2010
    3, 1/3/2010, 'Noname', null, null
    Table B (MasterID, Added Date, Added By, Updated By, Updated Date)
    1, 1/3/2010, 'Wilma', 'The Great Kazoo', 1/5/2010
    2, 1/4/2010, 'Betty', 'Dino', 1/4/2010
    Table C (MasterID, Added Date, Added By, Updated By, Updated Date)
    1, 1/5/2010, 'Pebbles', null, null
    2, 1/6/2010, 'BamBam', null, null
    Table D (MasterID, Added Date, Added By, Updated By, Updated Date)
    1, 1/2/2010, 'Noname', null, null
    3, 1/4/2010, 'Wilma', null, null
    I need to return the max added date and corresponding user, and the max updated date and corresponding user, for each distinct MasterID when tables A, B, C & D are UNION'ed, i.e.:
    1, 1/5/2010, 'Pebbles', 'The Great Kazoo', 1/5/2010
    2, 1/6/2010, 'BamBam', 'Mr. Slate', 1/7/2010
    3, 1/4/2010, 'Wilma', null, null
    I know how to do this with one date/user per row, but with two it is beyond me. The DBMS is SQL Server 2005. A T-SQL solution is preferred. Thanks in advance, Dave
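
    One way to approach this (a sketch only – the table and column names below are assumptions, since the real DDL isn't shown): UNION ALL the four tables once, then take the top row per MasterID twice with ROW_NUMBER(), once ordered by the added date and once by the updated date. This works on SQL Server 2005.

      -- Sketch (assumed object names): latest "added" and latest "updated" per MasterID
      ;WITH AllRows AS (
          SELECT MasterID, AddedDate, AddedBy, UpdatedDate, UpdatedBy FROM dbo.TableA
          UNION ALL
          SELECT MasterID, AddedDate, AddedBy, UpdatedDate, UpdatedBy FROM dbo.TableB
          UNION ALL
          SELECT MasterID, AddedDate, AddedBy, UpdatedDate, UpdatedBy FROM dbo.TableC
          UNION ALL
          SELECT MasterID, AddedDate, AddedBy, UpdatedDate, UpdatedBy FROM dbo.TableD
      ), LastAdded AS (
          SELECT MasterID, AddedDate, AddedBy,
                 ROW_NUMBER() OVER (PARTITION BY MasterID ORDER BY AddedDate DESC) AS rn
          FROM AllRows
      ), LastUpdated AS (
          SELECT MasterID, UpdatedDate, UpdatedBy,
                 ROW_NUMBER() OVER (PARTITION BY MasterID ORDER BY UpdatedDate DESC) AS rn
          FROM AllRows
          WHERE UpdatedDate IS NOT NULL
      )
      SELECT a.MasterID, a.AddedDate, a.AddedBy, u.UpdatedBy, u.UpdatedDate
      FROM LastAdded AS a
      LEFT JOIN LastUpdated AS u ON u.MasterID = a.MasterID AND u.rn = 1
      WHERE a.rn = 1
      ORDER BY a.MasterID;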

    Read the article
