Search Results

Search found 30046 results on 1202 pages for 'document load'.

Page 395 of 1202

  • BIWA Wednesday TechCast Series - Opposition to Data Warehouse Initiatives

    - by jenny.gelhausen
    BIWA Wednesday TechCast Series - 19th Event! Opposition to Data Warehouse Initiatives. Please join us for this webcast on Wednesday, March 24, at 12 noon Eastern (check the time for your local area). The webcast is open to clients, prospects and partners. No matter how good your technology and technical skills, organizational issues can derail a data warehousing or BI project. BIWA therefore presents a vital topic that crosses product boundaries: organizational resistance to data warehouse initiatives - how to recognize it and what to do about it. Many a DW/BI professional has been surprised by organizational resistance to DW/BI initiatives. Yet real organizational imperatives may be behind this apparently irrational behavior. Based on in-depth interviews with IT professionals, industry consultants, and power users, our speaker Bruce Jenks will present his research findings about what drives organizational resistance to data warehouse initiatives. The talk will cover specific behaviors that can signal organizational resistance to a data warehouse program and what organizations have done to address such resistance. Presenter: Bruce Jenks of Dun and Bradstreet. Bruce Jenks has over 20 years' experience in data warehousing and business intelligence, much of it as a consultant to large organizations spanning the US. Bruce's data warehousing clients have included firms such as Sprint, Gallo Wines, Southern California Edison, The Gap, and Safeway. He started his data warehousing career at Metaphor Computers, a pioneering DW/BI firm from which a number of industry luminaries sprang, including Ralph Kimball (author of The Data Warehouse Toolkit). Bruce continued his data warehousing career at HP, Stanford University and other firms. Bruce is currently completing his doctorate in business administration at Golden Gate University, and today's material arises from his doctoral research. He is also a principal consultant for Dun and Bradstreet. Audio Dial-In: 866 682 4770 | Audio Meeting ID: 1683901 | Audio Meeting Passcode: 334451 | Web Conference: Please register at https://www1.gotomeeting.com/register/807185273 - after you register you will be provided with a link to the TechCast. Invitation to Speakers: All BIWA members and Oracle professionals (experts, end users, managers, DBAs, developers, data analysts, ISVs, partners, etc.) may submit abstracts for 45-minute technical webcasts to our Oracle BIWA (IOUG SIG) Community. Submit your BIWA TechCast abstract today! BIWA is a worldwide forum with over 2000 members who are business intelligence, warehousing and analytics professionals. BIWA presents information, experiences and best practices in successfully deploying Oracle Database-centric BI, Data Warehousing, and Analytics products, features and Options--the Oracle Database "BIWA" platform. Attendance Information & Replays at the BIWA website: oraclebiwa.org

    Read the article

  • AS11 Oracle B2B Sync Support - Series 1

    - by sinkarbabu.kirubanithi
    Synchronous message support has been enabled in Oracle B2B 11G. This helps customers send a business message and receive the corresponding business response synchronously. We would like to keep this blog entry as a three-part series: the first covers Oracle B2B configuration details, followed by 'how it can be consumed and utilized in an enterprise' using a composites-backed model. The last one will talk about more sophisticated seeded support built on the Oracle B2B platform (Note: the last one is still in the description phase and the ETA hasn't been finalized yet). Details: In an effort to enable synchronous processing in Oracle B2B, we provided a platform using the existing 'callout' mechanism. In this case, we expect the 'callout' attached to the agreement to deliver the incoming business message (inbound) to the back-end application, get the corresponding business response from the back-end, and deliver it to Oracle B2B as its output. The output of the 'callout' is processed as the outbound message and attached as the response to the inbound message. Requirements to enable Sync Support: Outbound side: Outbound Agreement - to send the business message request; Inbound Agreement - to receive the business message response. Inbound side: Inbound Agreement - to receive the business message request; Outbound Agreement - to send the business message response; Agreement Level Callout - to deliver the inbound request to the back-end and get the corresponding business response. This feature is supported only for HTTP-based transport to exchange messages with Trading Partners. One may initiate the outbound message (enqueue) using any of the available Transports in Oracle B2B. Configuration: Outbound side: Please add "syncresponse=true" as an "Additional Transport Header" parameter in the remote Trading Partner's HTTP delivery channel configuration. This enables Oracle B2B to process the HTTP response as an inbound message and deliver it to the back-end application. All other configuration related to Agreement and Document setup remains the same. Inbound side: There is no change in Agreement and Document setup. To enable "Sync Support", you need to build a 'callout' that takes responsibility for delivering the inbound message to the back-end, getting the corresponding business response from the back-end, and attaching that response as its output. Oracle B2B treats the output of the 'callout' as the outbound message and delivers it to the Trading Partner as the synchronous HTTP response. Requests that need to be processed synchronously should be received by the "syncreceiver" endpoint (http://<host>:<port>/b2b/syncreceiver) in Oracle B2B. Exception Handling: Existing Oracle B2B exception handling applies to this use case as well. Here's the sample callout, SampleSyncCallout.java. We will get you the second part, which talks about the 'SOA composites'-backed model for designing the "Sync Support" use case from back-end to Trading Partners - stay tuned.

    Read the article

  • How to perform feature upgrade in SharePoint2010 part2

    - by ybbest
    In my last post, I showed you how to perform a feature upgrade and upgraded my feature from 0.0.0.0 to 1.0.0.1. In this post, I’d like to continue on this topic and upgrade the feature again. For the first version of my solution, I deployed a document library with a custom document set content type and then upgraded the solution to index the application number column. Now, I will create a new version of the solution so that it removes the list threshold. You can download the solution here. Once you extract the solution, the first version is in the Original folder. In order to deploy the original solution, you need to run sitecreation.ps1 in the Script folder. Version 1.1 is in the Upgrade folder and version 1.2 is in the Upgrade2 folder. You need to make the following changes to the existing solution. 1. Modify the ApplicationLibrary.Template.xml to add an upgrade action for the new version. 2. Add the following code to the feature event receiver (a sketch of the GetFeatureWeb helper it calls is shown below): public override void FeatureUpgrading(SPFeatureReceiverProperties properties, string upgradeActionName, System.Collections.Generic.IDictionary<string, string> parameters) { base.FeatureUpgrading(properties, upgradeActionName, parameters); SPWeb web = GetFeatureWeb(properties); SPList applicationLibrary = web.Lists.TryGetList(ApplicationLibraryNamesConstant.ApplicationLibraryName); switch (upgradeActionName) { case "IndexApplicationNumber": if (applicationLibrary != null) { SPField queueField = applicationLibrary.Fields["ApplicationNumber"]; queueField.Indexed = true; queueField.Update(); } break; case "RemoveListThreshold": applicationLibrary.EnableThrottling = false; applicationLibrary.Update(); break; } } 3. Package your solution and run the feature upgrade PowerShell script: $wspFolder ="v1.2" $scriptPath=Split-Path $myInvocation.MyCommand.Path $siteUrl = "http://ybbest" $featureToCheckGuid="1b9d84cd-227d-45f1-92d4-a43008aa8fe7" $requiredFeatureVersion="1.0.0.1" $siteUrlOfFeatureToBeChecked="http://ybbest" AppendLog "Starting Solution UpgradeSolutionAndFeatures.ps1" Magenta & "$scriptPath\UpgradeSolutionAndFeatures.ps1" $siteUrl $wspFolder $featureToCheckGuid $requiredFeatureVersion $siteUrlOfFeatureToBeChecked Write-Host AppendLog "All features updated" "Green" References: Feature upgrade.
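
    The event receiver above calls a GetFeatureWeb helper that the excerpt does not show. Here is a minimal sketch of what such a helper might look like - an assumption based on the feature scopes SharePoint 2010 supports, not code from the original post:

        using Microsoft.SharePoint;   // at the top of the receiver's source file

        // Belongs in the same feature receiver class as FeatureUpgrading above.
        private SPWeb GetFeatureWeb(SPFeatureReceiverProperties properties)
        {
            // Web-scoped feature: the parent is the SPWeb itself.
            SPWeb web = properties.Feature.Parent as SPWeb;
            if (web == null)
            {
                // Site-scoped feature: fall back to the root web of the site collection.
                SPSite site = properties.Feature.Parent as SPSite;
                if (site != null)
                {
                    web = site.RootWeb;
                }
            }
            return web;
        }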

    Read the article

  • An XEvent a Day (7 of 31) – Targets Week – bucketizers

    - by Jonathan Kehayias
    Yesterday’s post, Targets Week - asynchronous_file_target, looked at the asynchronous_file_target Target in Extended Events and how it outputs the raw Event data in an XML document. Continuing with Targets week, today we’ll look at the bucketizer targets in Extended Events, which can be used to group Events based on the Event data that is being returned. What is the bucketizer? The bucketizer performs grouping of Events as they are processed by the target into buckets based on the Event data and...(read more)

    Read the article

  • Large File Upload in SharePoint 2010

    - by Sahil Malik
    Okay this is a big BIG B-I-G problem. And with SP2010 it’s going to be more prominent, because at least on the server side, SharePoint can support large files much, much better than SharePoint 2007 ever did. The issues with very large files being uploaded through any browser-based API are: Reliably transferring gigabyte or bigger files without breakages over a protocol like HTTP, which is better suited to tiny transfers like images and text. Not killing your browser, because it has to load all that in memory. Not killing your web server, because everything you upload through HTTP POST first gets streamed into IIS memory - w3wp.exe memory - before the ENTIRE FILE finishes uploading and before it is stored. Which means: you cannot show an accurate and live progress bar of the upload (IIS gives you no such accurate metric of an upload; all the counters it gives you are approximate); your w3wp.exe eats up all server memory - 4GB of it, for a 4GB upload; and a thread is kept busy for the entire duration of the upload, thereby greatly limiting your web server's capability to serve newer requests, which kills effective load balancing. Not killing your content database, because as you upload a very large file it gets written sequentially into the DB, which for a very large file severely impacts database performance. I had put together another video showing RBS usage in SharePoint 2010, and I talked about many practical ramifications of using RBS in SharePoint in that video. Note that enabling large file support will never ever be a point-and-click job, simply because there are too many questions one needs to ask, and too many things one needs to plan for. However, one part that will remain common across all large file upload scenarios, in SharePoint or outside of SharePoint, is to do it efficiently while not killing the web server (see the streaming sketch below). In this video, I describe using the Telerik Silverlight Upload control with SharePoint 2010 to enable efficient large file uploads in SharePoint. Presenting... the video. Comment on the article ....
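
    To make the "stream it, don't buffer it" point concrete, here is a minimal client-side sketch (not the Telerik Silverlight control discussed in the video) that pushes a large file to a hypothetical upload endpoint in 64 KB chunks so the whole file never sits in client memory; the file path and URL are placeholders:

        using System;
        using System.IO;
        using System.Net;

        class ChunkedUploader
        {
            static void Main()
            {
                string filePath = @"C:\temp\bigfile.iso";                                     // hypothetical large file
                var request = (HttpWebRequest)WebRequest.Create("http://server/upload.ashx"); // hypothetical endpoint
                request.Method = "POST";
                request.AllowWriteStreamBuffering = false;                 // stream the body instead of buffering it
                request.ContentLength = new FileInfo(filePath).Length;

                using (FileStream source = File.OpenRead(filePath))
                using (Stream body = request.GetRequestStream())
                {
                    byte[] buffer = new byte[64 * 1024];
                    int read;
                    while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
                        body.Write(buffer, 0, read);                       // 64 KB at a time
                }

                using (var response = (HttpWebResponse)request.GetResponse())
                    Console.WriteLine(response.StatusCode);
            }
        }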

    Read the article

  • OWB 11gR2 &ndash; Degenerate Dimensions

    - by David Allan
    Ever wondered how to build degenerate dimensions in OWB and still get the benefits of slowly changing dimensions and cube loading? It's now possible, through some changes in 11gR2, to make the dimension and cube loading much more flexible. This lets you get the benefits of OWB's surrogate key handling and slowly changing dimension references when loading the fact table while still supporting degenerate dimensions (see Ralph Kimball's degenerate dimensions design tip). Here we will see how to use the cube operator to load slowly changing, regular and degenerate dimensions. The cube and cube operator can now work with dimensions which have no surrogate key as well as dimensions with surrogates, so you can get the benefit of the cube loading and incorporate the degenerate dimension loading. What you need to do is create a dimension in OWB that is purely used for ETL metadata; the dimension itself is never deployed (its table is, but holds no data); it has no surrogate keys; and it has a single level with a business attribute (the degenerate dimension data) and a dummy attribute, say description, just to pass the OWB validation. When this degenerate dimension is added into a cube, you will need to configure the fact table created and set the 'Deployable' flag to FALSE for the foreign key generated to the degenerate dimension table. The degenerate dimension reference will then be in the cube operator and used when matching. Create the degenerate dimension using the regular wizard. Delete the Surrogate ID attribute; it is not needed. Define a level name for the dimension member (any name). After the wizard has completed, in the editor delete the hierarchy STANDARD that was automatically generated; there is only a single level, so there is no need for a hierarchy and it shouldn't really be created. Deploy the implementing table DD_ORDERNUMBER_TAB; this needs to be deployed but with no data (the mapping here will do a left outer join of the source data with the empty degenerate dimension table). Now, go ahead and build your cube - use the regular TIMES dimension, for example, and your degenerate dimension DD_ORDERNUMBER; you can add in SCD dimensions etc. Configure the fact table created and set Deployable to false, so the foreign key does not get generated. You can now use the cube in a mapping and load data into the fact table via the cube operator; this will look after surrogate lookups and slowly changing dimension references. If you generate the SQL you will see the ON clause for matching includes the columns representing the degenerate dimension columns. Here we have seen how this use case for loading fact tables using degenerate dimensions becomes a whole lot simpler using OWB 11gR2. I'm sure there are other use cases where using this mix of dimensions with surrogate and regular identifiers is useful; fact tables partitioned by date columns are another classic example where this will greatly help and make the cube operator much more useful. It would be good to hear any comments.

    Read the article

  • Oracle EBS R12.1.1 system09.dbf file corruption Bug

    - by longchun.zhu
    After Installing or Upgrading: Perform the following steps after installing or upgrading to Release 12.1.1 and before allowing users to access the system. Manually fix the database dbf file: If you installed 12.1.1 with a startCD of 12.1.1.9 or earlier (see Oracle Applications Release Notes, Release 12.1.1, My Oracle Support Document 798258.1), you must run the following SQL commands to fix a particular corrupted dbf file:
        $ sqlplus /nolog
        SQL> connect / as sysdba
        SQL> alter database datafile '[full path of system09.dbf]' resize 1000M;
        SQL> alter database datafile '[full path of system09.dbf]' resize 1500M;

    Read the article

  • Show raw Text Code from a URL with CodePaste.NET

    - by Rick Strahl
    I introduced CodePaste.NET more than 2 years ago. In case you haven't checked it out, it's a code-sharing site where you can post some code, assign a title and syntax scheme to it and then share it with others via a short URL. The idea is super simple and it's not the first time this has been done, but it's focused on Microsoft languages and caters to that crowd. Show your own code from the Web: There's another feature that I tweeted about recently that's been there for some time, but is not used very much: CodePaste.NET has the ability to show raw text-based code from a URL on the Web in syntax-colored format for any of the formats provided. I use this all the time with code links to my Subversion repository, which only displays code as plain text. Using CodePaste.NET allows me to show syntax-colored versions of the same code. For example I can go from this URL: http://www.west-wind.com:8080/svn/WestwindWebToolkit/trunk/Westwind.Utilities/SupportClasses/PropertyBag.cs to a nicely colored source code view at this URL: http://codepaste.net/ShowUrl?url=http%3A%2F%2Fwww.west-wind.com%3A8080%2Fsvn%2FWestwindWebToolkit%2Ftrunk%2FWestwind.Utilities%2FSupportClasses%2FPropertyBag.cs&Language=C%23 which looks like this: Use the Form or access URLs directly: To get there, navigate to the Web Code icon on the CodePaste.NET site, paste your original URL and select a language to display. The form creates a link like the one shown above, which has two query string parameters: url - the URL for the raw text on the Web; language - the code language used for syntax highlighting. Note that the parameters must be URL-encoded to work - especially the # in C# - because otherwise the # will be interpreted by the browser as a hash tag to jump to in the target URL (see the sketch below). The URL must be Web accessible so that CodePaste can download it and then apply the syntax coloring. It doesn't work with localhost URLs, for example. The code must be returned as plain text - HTML-based text doesn't work. Hope some of you find this a useful feature. Enjoy… © Rick Strahl, West Wind Technologies, 2005-2011. Posted in .NET
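
    As a small illustration of the encoding requirement, here is a sketch that builds a ShowUrl link with both query string parameters escaped; the source URL is simply the example from the post:

        using System;

        class ShowUrlLinkBuilder
        {
            static void Main()
            {
                // The raw source file and the syntax-highlighting language to request.
                string sourceUrl = "http://www.west-wind.com:8080/svn/WestwindWebToolkit/trunk/Westwind.Utilities/SupportClasses/PropertyBag.cs";
                string language = "C#";   // the '#' must be escaped (%23) or the browser treats it as a fragment marker

                string link = "http://codepaste.net/ShowUrl" +
                              "?url=" + Uri.EscapeDataString(sourceUrl) +
                              "&Language=" + Uri.EscapeDataString(language);

                Console.WriteLine(link);
            }
        }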

    Read the article

  • Add an Opera Style Status Bar to Firefox

    - by Asian Angel
    Anyone who has used Opera will be familiar with the information presented for the webpage that is currently loading in the browser (i.e. the number of images loaded). If you would like to have that same functionality in Firefox, then join us as we look at the Extended Statusbar extension. Before: Here is the default setup for Firefox…not a lot of information available to indicate exactly how much of the webpage has already loaded versus what has not. For some people this is enough, but what if you like more details? Extended Statusbar in Action: You may be curious about the information that the Extended Statusbar extension will provide. The information includes: the percentage of the webpage loaded, the number of images loaded, bytes downloaded, average download speed, and the load time. After emptying the cache we once again reloaded the HTG homepage. The default style/mode is “Classic Style” and the “webpage load information” will be displayed within your “Status Bar” as shown here. The information available after the webpage finished loading in “Classic Style”. If you prefer “Slim Mode”, this is how your “Status Bar” should look afterwards…very condensed. For those preferring the “New Style”, a temporary addition will appear above your regular “Status Bar” and disappear just a few seconds after the webpage has fully loaded (unless changed in the “Settings”). Settings: The “Settings” are set up in two different ways. For those who prefer to use the “Classic Style & Slim Mode”, these are the options available to you. If you prefer the “New Style” then you will have a whole different set of options available. Notice that you can exclude certain webpages and set a custom style if desired. Conclusion: If you have been wanting to add Opera-style webpage loading information to your “Status Bar”, then you should definitely give this extension a try. Links: Download the Extended Statusbar extension (Mozilla Add-ons)

    Read the article

  • Crystal Reports for VS deployment to Web Application doesn't work (3 replies)

    I've followed the "documentation" for getting Crystal Reports to work on a production web site. I've migrated a VS 2003 web project to a VS 2008 web application. Everything works fine on my dev box. I publish the site out to the server (2003 x86) and the reports are a no-go; I get the infamous: ***** Error Type: System.IO.FileLoadException ***** Error Message: Could not load file or assembly 'CrystalDeci...

    Read the article

  • Nice network diagram editor?

    - by Nicolas Raoul
    Writing a commercial proposal, I want to create a nice graphic showing the clients the architecture I have thought of for their IT network, with servers, network connections, firewall, load-balancing, etc. For years I have been using dia, but I am tired of it: the results are not satisfying, very few network elements are available, and each element's graphic representation is really ugly. Question: How do I create nice network diagrams? If a better set of elements were available for dia, that would be a solution.

    Read the article

  • Guidance and Pricing for MSDN 2010

    - by John Alexander
    Sorry for the rather lengthy post here. I get asked this all the time so I decided to post it…Visual Studio 2010 editions will be available on April 12, 2010. Product Features Professional with MSDN Essentials Professional with MSDN Premium with MSDN Ultimate with MSDN Test Professional with MSDN Debugging and Diagnostics IntelliTrace (Historical Debugger)         Static Code Analysis       Code Metrics       Profiling       Debugger   Testing Tools Unit Testing   Code Coverage       Test Impact Analysis       Coded UI Test       Web Performance Testing         Load Testing1         Microsoft Test Manager 2010       Test Case Management2       Manual Test Execution       Fast-Forward for Manual Testing       Lab Management Configuration3       Integrated Development Environment Multiple Monitor Support   Multi-Targeting   One Click Web Deployment   JavaScript and jQuery Support   Extensible WPF-Based Environment Database Development Database Deployment       Database Change Management2       Database Unit Testing       Database Test Data Generation       Data Access   Development Platform Support Windows Development   Web Development   Office and SharePoint Development   Cloud Development   Customizable Development Experience   Architecture and Modeling Architecture Explorer         UML® 2.0 Compliant Diagrams (Activity, Use Case, Sequence, Class, Component)         Layer Diagram and Dependency Validation         Read-only diagrams (UML, Layer, DGML Graphs)         Lab Management Virtual environment setup & tear down3       Provision environment from template3       Checkpoint environment3       Team Foundation Server Version Control2   Work Item Tracking2   Build Automation2   Team Portal2   Reporting & Business Intelligence2   Agile Planning Workbook2   Microsoft Visual Studio Team Explorer 2010   Test Case Management2       MSDN Subscription – Software and Services for Production Use Windows Azure Platform 20 hrs/mo † 50 hrs/mo † 100 hrs/mo † 250 hrs/mo † n/a Microsoft Visual Studio Team Foundation Server 2010   Microsoft Visual Studio Team Foundation Server 2010 CAL   1 1 1 1 Microsoft Expression Studio 3       Microsoft Office Professional Plus 2010, Project Professional 2010, Visio Premium 2010 (following Office 2010 launch)       MSDN Subscription – Software for Development and Testing 4 Windows 7, Windows Server 2008 R2 and SQL Server 2008 Toolkits, Software Development Kits, Driver Development Kits Previous versions of Windows (client and server operation systems)   Previous versions of Microsoft SQL Server   Microsoft Office       Microsoft Dynamics       All other Servers       Windows Embedded operating systems       Teamprise         MSDN Subscription – Other Benefits Technical support incidents 0 2 4 4 2 Priority support in MSDN Forums Microsoft e-learning collections (typically 10 courses or 20 hours) 0 1 2 2 1 MSDN Flash newsletter MSDN Online Concierge MSDN Magazine   System Requirements View View View View View Buy from (MSRP) $799 $1,199 $5,469 $11,899 $2,169 Renew from (MSRP) $549 (upgrade) $799 $2,299 $3,799 $899 † Availability varies by country and subscription level.  Details available on the MSDN site 1. May require one or more Microsoft Visual Studio Load Test Virtual User Pack 2010 2. Requires Team Foundation Server and a Team Foundation Server CAL 3. Requires Microsoft Visual Studio Lab Management 2010 4. Per-user license allows unlimited installations and use for designing, developing, testing, and demonstrating applications. 
UML is a registered trademark of Object Management Group, Inc. Windows is either a registered trademark or trademark of Microsoft Corporation in the United States and/or other countries.

    Read the article

  • What to Do When Windows Won’t Boot

    - by Chris Hoffman
    You turn on your computer one day and Windows refuses to boot — what do you do? “Windows won’t boot” is a common symptom with a variety of causes, so you’ll need to perform some troubleshooting. Modern versions of Windows are better at recovering from this sort of thing. Where Windows XP might have stopped in its tracks when faced with this problem, modern versions of Windows will try to automatically run Startup Repair. First Things First Be sure to think about changes you’ve made recently — did you recently install a new hardware driver, connect a new hardware component to your computer, or open your computer’s case and do something? It’s possible the hardware driver is buggy, the new hardware is incompatible, or that you accidentally unplugged something while working inside your computer. The Computer Won’t Power On At All If your computer won’t power on at all, ensure it’s plugged into a power outlet and that the power connector isn’t loose. If it’s a desktop PC, ensure the power switch on the back of its case — on the power supply — is set to the On position. If it still won’t power on at all, it’s possible you disconnected a power cable inside its case. If you haven’t been messing around inside the case, it’s possible the power supply is dead. In this case, you’ll have to get your computer’s hardware fixed or get a new computer. Be sure to check your computer monitor — if your computer seems to power on but your screen stays black, ensure your monitor is powered on and that the cable connecting it to your computer’s case is plugged in securely at both ends. The Computer Powers On And Says No Bootable Device If your computer is powering on but you get a black screen that says something like “no bootable device” or another sort of “disk error” message, your computer can’t seem to boot from the hard drive that Windows was installed on. Enter your computer’s BIOS or UEFI firmware setup screen and check its boot order setting, ensuring that it’s set to boot from its hard drive. If the hard drive doesn’t appear in the list at all, it’s possible your hard drive has failed and can no longer be booted from. In this case, you may want to insert Windows installation or recovery media and run the Startup Repair operation. This will attempt to make Windows bootable again. For example, if something overwrote your Windows drive’s boot sector, this will repair the boot sector. If the recovery environment won’t load or doesn’t see your hard drive, you likely have a hardware problem. Be sure to check your BIOS or UEFI’s boot order first if the recovery environment won’t load. You can also attempt to manually fix Windows boot loader problems using the fixmbr and fixboot commands. Modern versions of Windows should be able to fix this problem for you with the Startup Repair wizard, so you shouldn’t actually have to run these commands yourself. Windows Freezes or Crashes During Boot If Windows seems to start booting but fails partway through, you may be facing either a software or hardware problem. If it’s a software problem, you may be able to fix it by performing a Startup Repair operation. If you can’t do this from the boot menu, insert a Windows installation disc or recovery disk and use the startup repair tool from there. If this doesn’t help at all, you may want to reinstall Windows or perform a Refresh or Reset on Windows 8. 
If the computer encounters errors while attempting to perform startup repair or reinstall Windows, or the reinstall process works properly and you encounter the same errors afterwards, you likely have a hardware problem. Windows Starts and Blue Screens or Freezes If Windows crashes or blue-screens on you every time it boots, you may be facing a hardware or software problem. For example, malware or a buggy driver may be loading at boot and causing the crash, or your computer’s hardware may be malfunctioning. To test this, boot your Windows computer in safe mode. In safe mode, Windows won’t load typical hardware drivers or any software that starts automatically at startup. If the computer is stable in safe mode, try uninstalling any recently installed hardware drivers, performing a system restore, and scanning for malware. If you’re lucky, one of these steps may fix your software problem and allow you to boot Windows normally. If your problem isn’t fixed, try reinstalling Windows or performing a Refresh or Reset on Windows 8. This will reset your computer back to its clean, factory-default state. If you’re still experiencing crashes, your computer likely has a hardware problem. Recover Files When Windows Won’t Boot If you have important files that will be lost and want to back them up before reinstalling Windows, you can use a Windows installer disc or Linux live media to recover the files. These run entirely from a CD, DVD, or USB drive and allow you to copy your files to another external media, such as another USB stick or an external hard drive. If you’re incapable of booting a Windows installer disc or Linux live CD, you may need to go into your BIOS or UEFI and change the boot order setting. If even this doesn’t work — or if you can boot from the devices and your computer freezes or you can’t access your hard drive — you likely have a hardware problem. You can try pulling the computer’s hard drive, inserting it into another computer, and recovering your files that way. Following these steps should fix the vast majority of Windows boot issues — at least the ones that are actually fixable. The dark cloud that always hangs over such issues is the possibility that the hard drive or another component in the computer may be failing. Image Credit: Karl-Ludwig G. Poggemann on Flickr, Tzuhsun Hsu on Flickr     

    Read the article

  • SQL – Migrate Database from SQL Server to NuoDB – A Quick Tutorial

    - by Pinal Dave
    Data is growing exponentially and every organization with growing data is thinking of the next big innovation in the world of Big Data. Big data is indeed the future for every organization at some point in time. Just like every other next big thing, big data has its own challenges and issues. The biggest challenge associated with big data is finding the ideal platform that supports the scalability and growth of the data. If you are a regular reader of this blog, you must be familiar with NuoDB. I have been working with NuoDB for a while and their recent release is the best thus far. NuoDB is an elastically scalable SQL database that can run on localhost, datacenter and cloud-based resources. A key feature of the product is that it does not require sharding (read more here). Last week, I was able to install NuoDB in less than 90 seconds and have explored their Explorer and Admin sections. You can read about my experiences in these posts: SQL – Step by Step Guide to Download and Install NuoDB – Getting Started with NuoDB; SQL – Quick Start with Admin Sections of NuoDB – Manage NuoDB Database; SQL – Quick Start with Explorer Sections of NuoDB – Query NuoDB Database. Many SQL Authority readers have been following me in my journey to evaluate NuoDB. One of the frequently asked questions I’ve received from you is whether there is any way to migrate data from SQL Server to NuoDB. The fact is that there is indeed a way to do so, and NuoDB provides a fantastic tool which can help users to do it. NuoDB Migrator is a command line utility that supports the migration of Microsoft SQL Server, MySQL, Oracle, and PostgreSQL schemas and data to NuoDB. The migration to NuoDB is a three-step process: NuoDB Migrator generates a schema for a target NuoDB database, it dumps data from the source database, and it loads that data into the target NuoDB database. Let’s see how we can migrate our data from SQL Server to NuoDB using a simple three-step approach. But before we do that we will create a sample database in MSSQL, and later we will migrate the same database to NuoDB. Setup Step 1: Build sample data. CREATE DATABASE [Test]; CREATE TABLE [Department]( [DepartmentID] [smallint] NOT NULL, [Name] VARCHAR(100) NOT NULL, [GroupName] VARCHAR(100) NOT NULL, [ModifiedDate] [datetime] NOT NULL, CONSTRAINT [PK_Department_DepartmentID] PRIMARY KEY CLUSTERED ( [DepartmentID] ASC ) ) ON [PRIMARY]; INSERT INTO Department SELECT * FROM AdventureWorks2012.HumanResources.Department; Note that I am using the SQL Server AdventureWorks database to build this sample table, but you can build this sample table any way you prefer. Setup Step 2: Install 64-bit Java. Before you can begin the migration process to NuoDB, make sure you have 64-bit Java installed on your computer. This is due to the fact that the NuoDB Migrator tool is built in Java. You can download 64-bit Java for Windows, Mac OS X, or Linux from the following link: http://java.com/en/download/manual.jsp. One more thing to remember: make sure the JAVA_HOME environment variable points to your Java installation directory, or else the tool will not work. Here is how you can do it: Go to My Computer >> Right Click >> Select Properties >> Click on Advanced System Settings >> Click on Environment Variables >> Click on New and enter the following values. Variable Name: JAVA_HOME Variable Value: C:\Program Files\Java\jre7 Make sure you enter your Java installation directory in the Variable Value field. Setup Step 3: Install a JDBC driver for SQL Server. 
    There are two JDBC drivers available for SQL Server. Select the one you prefer to use by following one of the two links below: Microsoft JDBC Driver or jTDS JDBC Driver. In this example we will be using the jTDS JDBC driver. Once you download the driver, move it to your NuoDB installation folder. In my case, I have moved the JAR file of the driver into the C:\Program Files\NuoDB\tools\migrator\jar folder, as this is my NuoDB installation directory. Now we are all set to start the three-step migration process from SQL Server to NuoDB. Migration Step 1: NuoDB Schema Generation. Here is the command I use to generate a schema of my SQL Server database in NuoDB. First I go to the folder C:\Program Files\NuoDB\tools\migrator\bin and execute the nuodb-migrator.bat file. Note that my database name is 'test'. Additionally, my username and password are also 'test'. You can see that my SQL Server database is running on localhost on port 1433, and the schema of the table is 'dbo'. nuodb-migrator schema --source.driver=net.sourceforge.jtds.jdbc.Driver --source.url=jdbc:jtds:sqlserver://localhost:1433/ --source.username=test --source.password=test --source.catalog=test --source.schema=dbo --output.path=/tmp/schema.sql The above command generates a schema for all my SQL Server tables and puts it in C:\tmp\schema.sql. You can open the schema.sql file and execute it directly in your NuoDB instance. You can follow the link here to see how you can execute a SQL script in NuoDB. Please note that if you have not yet created the schema in the NuoDB database, you should create it before executing this step. Migration Step 2: Generate the Dump File of the Data. Once you have recreated your schema in NuoDB from SQL Server, the next step is very easy. Here we create a CSV-format dump file, which will contain all the data from all the tables of the SQL Server database. The command to do so is very similar to the above command. Be aware that this step may take a bit of time depending on your database size. nuodb-migrator dump --source.driver=net.sourceforge.jtds.jdbc.Driver --source.url=jdbc:jtds:sqlserver://localhost:1433/ --source.username=test --source.password=test --source.catalog=test --source.schema=dbo --output.type=csv --output.path=/tmp/dump.cat Once the above command is successfully executed you can find your CSV file in the C:\tmp\ folder. However, you do not have to do anything with it manually; the third and final step will take care of completing the migration process. Migration Step 3: Load the Data into NuoDB. After building the schema and taking a dump of the data, the very next step is essential and crucial: it takes the CSV file and loads it into the NuoDB database. nuodb-migrator load --target.url=jdbc:com.nuodb://localhost:48004/mytest --target.schema=dbo --target.username=test --target.password=test --input.path=/tmp/dump.cat Please note that in the above script we are now targeting the NuoDB database, which we have already created with the name of "MyTest". If the database does not exist, create it manually before executing the above script. I have kept the username and password as "test", but please make sure that you create a more secure password for your database for security reasons. Voila! You’re Done. That’s it. You are done. It took 3 setup and 3 migration steps to migrate your SQL Server database to NuoDB. You can now start exploring the database and build excellent, scale-out applications. 
    In this blog post, I have done my best to come up with a simple and easy process which you can follow to migrate your app from SQL Server to NuoDB. Download NuoDB: I strongly encourage you to download NuoDB and go through my 3-step migration tutorial from SQL Server to NuoDB. Additionally, here are two very important blog posts from NuoDB CTO Seth Proctor. He has written excellent blog posts on the concept of Administrative Domains. NuoDB has this concept of an Administrative Domain, which is a collection of hosts that can run one or multiple databases. Each database has its own TEs and SMs, but all are managed within the Admin Console for that particular domain. http://www.nuodb.com/techblog/2013/03/11/getting-started-provisioning-a-domain/ http://www.nuodb.com/techblog/2013/03/14/getting-started-running-a-database/ Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB

    Read the article

  • Dynamically creating a Generic Type at Runtime

    - by Rick Strahl
    I learned something new today. Not uncommon, but it's a core .NET runtime feature I simply did not know, although I've run into this issue a few times and worked around it in other ways. Today there was no working around it, and a few folks on Twitter pointed me in the right direction. The question I ran into is: How do I create a type instance of a generic type when I have dynamically acquired the type at runtime? Yup, it's not something that you do every day, but when you're writing code that parses objects dynamically at runtime it comes up from time to time. In my case it's in the bowels of a custom JSON parser. After some thought, triggered by a comment today, I realized it would be fairly easy to implement two-way Dictionary parsing for most concrete dictionary types. I could use a custom Dictionary serialization format that serializes as an array of key/value objects. Basically I can use a custom type (that matches the JSON signature) to hold my parsed dictionary data and then add it to the actual dictionary when parsing is complete. Generic Types at Runtime: One issue that came up in the process was how to figure out what types the Dictionary<K,V> generic parameters take. Reflection actually makes it fairly easy to figure out generic types at runtime with code like this: if (arrayType.GetInterface("IDictionary") != null) { if (arrayType.IsGenericType) { var keyType = arrayType.GetGenericArguments()[0]; var valueType = arrayType.GetGenericArguments()[1]; … } } The GetArrayType method gets passed a type instance that is the array or array-like object that is rendered in JSON as an array (which includes IList, IDictionary, IDataReader and a few others). In my case the type passed would be something like Dictionary<string, CustomerEntity>, so I know what the parent container class type is. Based on the container type it's then possible to use GetGenericArguments() to retrieve all the generic types in sequential order of definition (i.e. string, CustomerEntity). That's the easy part. Creating a Generic Type and Providing Generic Parameters at Runtime: The next problem is how do I get a concrete type instance for the generic type? I know what the type name is and I have a type instance, but it's generic, so how do I get a type reference to keyvalue<K,V> that is specific to the keyType and valueType above? Here are a couple of things that come to mind but that don't work (and yes I tried those unsuccessfully first): Type elementType = typeof(keyvalue<keyType, valueType>); Type elementType = typeof(keyvalue<typeof(keyType), typeof(valueType)>); The problem is that this explicit syntax expects a type literal, not some dynamic runtime value, so both of the above won't even compile. It turns out the way to create a generic type at runtime is using a fancy bit of syntax that until today I was completely unaware of: Type elementType = typeof(keyvalue<,>).MakeGenericType(keyType, valueType); The key is the typeof(keyvalue<,>) bit, which looks weird at best. It works however, producing the open generic type, and MakeGenericType then closes it over the runtime types. You can see the difference between the full generic type and the open generic type in the debugger: The nonGenericType doesn't show any type specialization, while the elementType type shows the string, CustomerEntity (truncated above) in the type name. Once the full type reference exists (elementType) it's then easy to create an instance. 
    In my case the parser parses through the JSON, and when it completes parsing the value/object it creates a new keyvalue<T,V> instance. Now that I know the element type, that's pretty trivial with: // Objects start out null until we find the opening tag resultObject = Activator.CreateInstance(elementType); Here the result object is picked up by the JSON array parser, which creates an instance of the child object (keyvalue<K,V>) and then parses and assigns values from the JSON document using the type's key/value property signature. Internally the parser then takes each individually parsed item and adds it to a List<keyvalue<K,V>> of items. Parsing through a Generic Type when you only have Runtime Type Information: When parsing of the JSON array is done, the List needs to be turned into a de facto Dictionary<K,V>. This should be easy since I know that I'm dealing with an IDictionary, and I know the generic types for the key and value. The problem again is that this needs to happen at runtime, which would mean using several Convert.ChangeType() calls in the code to dynamically cast at runtime. Yuk. In the end I decided the easier and probably only slightly slower way to do this is to use the dynamic type to collect the items and assign them, to avoid all the dynamic casting madness: else if (IsIDictionary) { IDictionary dict = Activator.CreateInstance(arrayType) as IDictionary; foreach (dynamic item in items) { dict.Add(item.key, item.value); } return dict; } This code creates an instance of the generic dictionary type first, then loops through all of my custom keyvalue<K,V> items and assigns them to the actual dictionary. By using dynamic here I can sidestep all the explicit type conversions that would be required in the three highlighted areas (not to mention that this nested method doesn't have access to the dictionary item generic types here). A stand-alone sketch of the whole round trip follows below. Static <-> Dynamic: Dynamic casting in a static language like C# is a bitch to say the least. This is one of the few times when I've cursed static typing and the arcane syntax that's required to coax types into the right format. It works but it's pretty nasty code. If it weren't for dynamic, that last bit of code would have been pretty ugly as well, with a bunch of Convert.ChangeType() calls to litter the code. Fortunately this type of type convulsion is rather rare and reserved for system level code. It's not every day that you create a string to object parser after all :-) © Rick Strahl, West Wind Technologies, 2005-2011. Posted in .NET CSharp
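
    Pulling the pieces together, here is a small self-contained sketch - using a stand-in keyvalue<K,V> class rather than the post's parser internals - that closes the generic type at runtime, creates an instance, and copies it into the target dictionary through the non-generic IDictionary interface:

        using System;
        using System.Collections;
        using System.Collections.Generic;

        public class keyvalue<K, V>
        {
            public K key { get; set; }
            public V value { get; set; }
        }

        class Program
        {
            static void Main()
            {
                // In the parser this type is discovered at runtime; hard-coded here for the demo.
                Type arrayType = typeof(Dictionary<string, int>);

                // Pull the generic arguments off the dictionary type.
                Type keyType = arrayType.GetGenericArguments()[0];
                Type valueType = arrayType.GetGenericArguments()[1];

                // Close the open generic keyvalue<,> over the runtime types.
                Type elementType = typeof(keyvalue<,>).MakeGenericType(keyType, valueType);

                // Create an element instance and populate it via dynamic binding.
                dynamic item = Activator.CreateInstance(elementType);
                item.key = "answer";
                item.value = 42;

                // Create the concrete dictionary and add the item through the non-generic interface.
                IDictionary dict = (IDictionary)Activator.CreateInstance(arrayType);
                dict.Add(item.key, item.value);

                Console.WriteLine("{0} item(s), answer = {1}", dict.Count, dict["answer"]);
            }
        }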

    Read the article

  • Don’t Delay - Apply the New 12.1.3 Procurement Rollup Patch NOW!

    - by user793553
    A new critical rollup patch (RUP) has just been released by Development for our 12.1.3 Procurement customers.  This new Patch 14254641:R12.PRC_PF.B contains important fixes for Purchasing, Internet Supplier Portal (iSupplier), Sourcing  and iProcurement (Web).  Go to My Oracle Support and enter Document ID 1468883.1 in the Knowledge Base search. This note contains information on who should apply the patch, how to apply the patch, critical fixes and important new features.

    Read the article

  • Video on Architecture and Code Quality using Visual Studio 2012&ndash;interview with Marcel de Vries and Terje Sandstrom by Adam Cogan

    - by terje
    Find the video HERE. Adam Cogan did a great Web TV interview with Marcel de Vries and myself on the topics of architecture and code quality. It was real fun participating in this session. Although we know each other from the MVP ALM community, Marcel, Adam and I hadn’t worked together before. It was very interesting to see how we agreed on so many terms, and how alike we were thinking. The basics of ensuring you have a good architecture and how you could document it is one thing. There was also the same agreement on the importance of having a high quality code base, and on how we used the Visual Studio 2012 tools, and some others (NDepend for example), to measure and ensure that the code quality was where it should be. As the tools, methods and thinking popped up during the interview it was a lot of “Hey! I do that too!”. The tools are not only for “after the fact” work; we use them during the coding. That way the tools become an integrated part of our coding work, and help us to find issues we may have overlooked. The video has a bunch of call-outs, pinpointing important things to remember. These are also listed on the corresponding web page. I haven’t seen that touch before, but really liked this way of doing it – it makes it much easier to spot the highlights. Titus Maclaren and Raj Dhatt from SSW have done a terrific job producing this video. And thanks to Lei Xu for doing the camera and recording job. Thanks guys! Also, if you are at TechEd Amsterdam 2012, go and listen to Adam Cogan in his session on “A modern architecture review: Using the new code review tools” Friday 29th, 10.15-11.30 and Marcel de Vries’ session on “IntelliTrace, what is it and how can I use it to my benefit” Wednesday 27th, 5-6.15. The highlights point out some important practices. I’ll elaborate on a few of them here: Add instructions on how to compile the solution. You do this by adding a text file with instructions to the solution, and keep it under source control. These instructions should contain what is needed on top of a standard install of Visual Studio. I do a lot of code reviews, and more often than not, I am not even able to compile the program, because they have used some tool or library that needs to be installed. The same applies to any new developer who joins the team, so do this to increase your productivity when the team changes, or a team member switches computer. Don’t forget to document what you have to configure on the computer, the IIS being a common one. The more automatically you can do this, the better. Use NuGet to pull down libraries. When the text document gets to more than, say, half a page, with a bunch of different things to do, convert it into a PowerShell script instead. The metrics warning levels: these are very conservatively set by Microsoft. You rarely see anything but green, and besides, you should have color scales for each of the metrics. I have a blog post describing a more appropriate set of levels, based on both research work and industry “best practices”. The essential limits are: Cyclomatic complexity and coupling: higher numbers are worse. On method levels: Green: from 0 to 10. Yellow: from 10 to 20 (some say 15) - acceptable, but have a look to see if there is something unneeded here. Red: from 20 to 40 - action required, get these down. Bleeding red: above 40 - this is the real red alert. Immediate action! (My invention, as people have asked what do I do when I have cyclomatic complexity of 150. 
    The only answer I could think of was: RUN!) Maintainability index: lower numbers are worse, on a scale from 0 to 100. On method levels: Green: 60 to 100. Yellow: 40 – 60 (you will always have methods here too; accept the higher ones, take a look at those that are down toward the lower limit, and check them against the other metrics). Red: 20 – 40 - action required, fix these. Bleeding red: below 20 - immediate action required. When doing metrics analysis, you should leave the generated code out. You do this by adding attributes; unfortunately Microsoft has “forgotten” to add these to all their stuff, so you might have to add them to some of the code. In most cases it can be done so that it is not overwritten by a new round of code generation. Take a look at my blog post here for details on how to do that (a minimal sketch of the attribute follows below). Class level metrics might also be useful, at least for coupling and maintenance, but it is much more difficult to set any fixed limits on those. Any metric aggregation on a higher level tends to be pretty useless, as the number of methods varies quite a lot, and there is little science on what number of methods can be regarded as good or bad. NDepend has a recommendation, but they say it may vary too. And in these days of data binding, the number might be pretty high, as properties count as methods. However, if you take the worst case situations, classes with more than 20 methods are suspicious, and coupling and cyclomatic complexity go red above 20, so any classes with more than 20x20 = 400 for these measures should be checked over. In the video we mention the SOLID principles, coined by “Uncle Bob” (Robert C. Martin). One of them, the Dependency Inversion principle, we discuss in the video. It is important to note that this principle is NOT about whether you should use a Dependency Inversion Container or not; it is about how you design the interfaces and interactions between your classes. The Dependency Inversion Container is just one technique which is based on this principle, and whose main purpose is to isolate things you would like to change at runtime, for example if you implement a plug-in architecture. Overuse of a Dependency Inversion Container is, however, NOT a good thing. It should be used for a purpose, and not as a general DI solution. The general DI solution and thinking, however, is useful far beyond the DIC. You should always “program to an abstraction”, and not to the concreteness. We also talk a bit about the GRASP patterns, a term coined by Craig Larman in his book Applying UML and Patterns. GRASP stands for General Responsibility Assignment Software Patterns, and these patterns describe fundamental principles of object design and responsibility assignment. What I find great about these patterns is that they are another way to focus on the responsibility of a class. One of the things I have most often found broken in software designs is that classes lack responsibility, and as a result there are a lot of classes mucking around in the internals of other classes. We also discuss the term “Code Smells”. This term was invented by Kent Beck and Martin Fowler when they worked on Fowler’s “Refactoring” book. A code smell is a set of “bad” coding practices, which are the drivers behind a corresponding set of refactorings. Here is a good list of the smells, and their corresponding refactor patterns. See also this.
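
    A minimal sketch of the attribute approach mentioned above for keeping generated code out of the metrics, assuming the standard GeneratedCodeAttribute is what the analysis tooling keys on; the class and tool names are placeholders:

        using System.CodeDom.Compiler;

        // Marking the generated half of a partial class so code metrics / code analysis skip it;
        // the hand-written partial in another file stays included in the numbers.
        [GeneratedCode("MyProxyGenerator", "1.0")]
        public partial class CustomerServiceProxy
        {
            // ...generated members that should not count against the metrics...
        }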

    Read the article

  • Error trapping for a missing data source in a Spring MVC / Spring JDBC web app [migrated]

    - by Geeb
    I have written a web app that uses Spring MVC libraries and Spring JDBC to connect to an Oracle DB. (I don't use any ORM-type libraries, as I create stored procedures on Oracle that do my stuff and I'm quite happy with that.) I use a connection pool to Oracle managed by the Tomcat container. The app generally works absolutely fine, by the way! BUT... I noticed the other day, when I tried to set up the app on another Tomcat instance, that I had forgotten to configure the connection pool, and obviously the app could not get hold of an org.apache.commons.dbcp.BasicDataSource object, so it crashed. I define the pool params in the Tomcat "context.conf". In my "web.xml" I have: <servlet> <servlet-name>appServlet</servlet-name> <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class> <init-param> <param-name>contextConfigLocation</param-name> <param-value>/WEB-INF/Spring/appServlet/servlet-context.xml</param-value> </init-param> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>appServlet</servlet-name> <!-- Map *everything* to appServlet --> <url-pattern>/</url-pattern> </servlet-mapping> <resource-ref> <description>Oracle Datasource example</description> <res-ref-name>jdbc/ora1</res-ref-name> <res-type>org.apache.commons.dbcp.BasicDataSource</res-type> <res-auth>Container</res-auth> </resource-ref> And I have a Spring "servlet-context.xml" where JNDI is used to map the data source object provided by the connection pool to a Spring bean with the ID of "dataSource": <jee:jndi-lookup id="dataSource" jndi-name="java:comp/env/jdbc/ora1" resource-ref="true" /> Here's the question: Where do I trap the case where the database cannot be accessed, for whatever reason? I don't want the user to see a yard-and-a-half of Java stack trace in their browser, but rather a nicer message that tells them there is a database problem, etc. It seems that my app tries to configure the "dataSource" bean (in "servlet-context.xml") before any code has tested that it can actually provide a dataSource object from the pool?! Maybe I'm not fully understanding exactly what is going on in these stages of the app firing up ... Thanks for any advice!

    Read the article

  • Survey: Do you write custom SQL CLR procedures/functions/etc

    - by James Luetkehoelter
    I'm quite curious, because despite the great capability of writing CLR-based stored procedures to offload those nasty operations TSQL isn't that great at (like iteration, or complex math), I'm continuing to see a wealth of SQL 2008 databases with complex stored procedures and functions which would make great candidates. The in-house skill to create the CLR code exists as well, but there is flat-out resistance to using it. In one scenario I was told "Oh, iteration isn't a problem because we've trained...(read more)
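
    For readers who have not tried it, here is a minimal sketch of the kind of CLR scalar function the post has in mind - iterative math that is clumsy in T-SQL - with purely illustrative names and logic:

        using System.Data.SqlTypes;
        using Microsoft.SqlServer.Server;

        public static class MathFunctions
        {
            [SqlFunction(IsDeterministic = true, IsPrecise = false)]
            public static SqlDouble CompoundInterest(SqlDouble principal, SqlDouble rate, SqlInt32 periods)
            {
                if (principal.IsNull || rate.IsNull || periods.IsNull)
                    return SqlDouble.Null;

                double result = principal.Value;
                for (int i = 0; i < periods.Value; i++)   // simple iteration: trivial in C#, awkward in T-SQL
                    result *= 1 + rate.Value;

                return new SqlDouble(result);
            }
        }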

    Read the article

  • Fixing a SkyDrive Sync Disaster

    - by Rick Strahl
    For a few months I've been using SkyDrive to handle some basic synching tasks for a number of folders of mine. Specifically I've been dumping a few of my development folders into SkyDrive so I have a live running backup. It had been working just fine until about a week ago when something went awry. Badly! The idea is that SkyDrive should sync files, but somewhere in its sync relationship it appears that SkyDrive got confused and assumed it needed to sync back older files to my local machine from the SkyDrive server. So rather than syncing my newer files to the server, SkyDrive was pushing older files back to me. Because SkyDrive is so slow actually updating data, it's not unusual for SkyDrive to be far behind in syncing, and apparently some files were out of date by several months. Of course this is insidious because I didn't notice it for quite some time. I'd been happily working away on my files when a few days ago I noted a bunch of files with -RasXps (my machine name) popping up in various folders. At first I thought my Git repository was giving me a fit, but eventually realized that SkyDrive was actually pushing old files into my monitored folders. To be fair, SkyDrive did make backups of the existing files, but by the time I caught it there were literally a few thousand files scattered on my machine that were now updated with old files from online. Here's what some of this looks like: if you look at the directory list you see a bunch of files with a -RasXps postfix appended to them. Those are the files that SkyDrive replaced and backed up on my machine. As you can see, the backed-up files are actually newer than the ones it pulled from the online SkyDrive. Unless I modified the files after they were updated, they all were older than the existing local files. Not exactly how I imagined my synching would work. At first I started cleaning up this mess manually. In most cases the obvious solution was to simply delete the original file and replace it with the -RasXps file, but not for all files. Some scrutiny was required, and besides being a pain in the ass to rename files, quite frequently I had to dig out Beyond Compare to compare a few files where it wasn't quite clear what was wrong. I quickly realized that doing this by hand would be too hard for the large number of files that got hosed.

    Hacking together a small .NET Utility

    So, I figured the easiest way to tackle this is to write a small utility app that shows me all the mangled files that have backups, allows me to compare them and then quickly select and update them, removing the -RasXps file after choosing one of the two files. What I ended up with was a quick and dirty WinForms app that allows me to pick a root folder and then shows all the -MachineName files: I start by picking a base folder and a template to search for, typically the -MachineName. Clicking Go brings up a list of all files in that folder and its subdirectories. The list also displays the dates for the saved (-MachineName) file and the current file on disk, along with highlighting for the newer of the two. I can right-click on any file and get a context menu pop-up to open the folder in Explorer, or open Beyond Compare and view the two files to compare differences, which I found very helpful for a number of files where I had modified the files after SkyDrive had updated to an old one. Typically these would be the green files (of which there were thankfully few).
    To 'fix' files I can select any number of files in the list, then use one of the three buttons on the right to apply an operation. I can use the Saved files, that is, the backup files that SkyDrive created with the -MachineName extension (-RasXps above). Or I can use the current file, which is the file with the right name on disk right now, and delete the -MachineName file. Or on some occasions I can just opt to delete both of them. For some files, like binaries, it's often easier to just delete them and rebuild than to choose. For the most part the process involves accepting the pink files, and checking the few green files to see if any modifications were made since the file was updated incorrectly by SkyDrive. For me, luckily, those are few in number. Anyway, I thought I'd share this utility in case anybody else runs into this issue. I've included the VS2012 solution and all the source code so you can see how it works, and you can tweak it as needed. The .NET 4.5 binaries are also included if you can't compile. Be warned though! This rough code is provided as is and makes no guarantees or claims about file safety. All three of the action buttons on the form will delete data. It's a very rough utility and there are no safeguards that ask nicely before deleting files. I highly recommend you make a backup before you have at it. This tool is very narrow in focus, but it might also work with other sync issues from other vendors. I seem to remember that I had similar issues with SugarSync at some point, and it too created the -MachineName style files on sync conflicts. Hope this helps somebody out so you can avoid wasting the better part of a full work day on this… Resources: Download the Source Code and Binaries for SkyDrive Rescue. © Rick Strahl, West Wind Technologies, 2005-2013. Posted in Windows, .NET
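    The author's tool is a C# WinForms app (source linked in the Resources note above). Purely as an illustration of the core scan step it automates, and not the author's code, here is a rough sketch in Java that lists every "-MachineName" backup file under a root folder and reports which copy is newer; the "-RasXps" suffix is an assumption taken from the post:

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.nio.file.attribute.FileTime;
        import java.util.stream.Stream;

        // Rough sketch (not the author's code): find "-MachineName" backup files and
        // report whether the backup or the current file is newer, so a human can decide.
        public class SyncConflictScanner {

            private static final String SUFFIX = "-RasXps"; // machine-name suffix, illustrative

            public static void main(String[] args) throws IOException {
                Path root = Paths.get(args.length > 0 ? args[0] : ".");
                try (Stream<Path> files = Files.walk(root)) {
                    files.filter(Files::isRegularFile)
                         .filter(p -> p.getFileName().toString().contains(SUFFIX))
                         .forEach(SyncConflictScanner::report);
                }
            }

            private static void report(Path backup) {
                // Derive the original file name by stripping the machine-name suffix.
                Path original = backup.resolveSibling(
                        backup.getFileName().toString().replace(SUFFIX, ""));
                try {
                    if (!Files.exists(original)) {
                        System.out.println("No counterpart for " + backup);
                        return;
                    }
                    FileTime backupTime = Files.getLastModifiedTime(backup);
                    FileTime originalTime = Files.getLastModifiedTime(original);
                    String newer = backupTime.compareTo(originalTime) > 0 ? "backup" : "current";
                    System.out.printf("%s -> newer copy: %s%n", original, newer);
                } catch (IOException e) {
                    System.err.println("Could not inspect " + backup + ": " + e.getMessage());
                }
            }
        }

    The actual choice of which copy to keep, and the deletion of the loser, is exactly the part the author wisely left to a human with a compare tool.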

    Read the article

  • Big Data – Buzz Words: What is NoSQL – Day 5 of 21

    - by Pinal Dave
    In yesterday's blog post we explored the basic architecture of Big Data. In this article we will take a quick look at one of the four most important buzz words that go around Big Data: NoSQL.

    What is NoSQL? NoSQL stands for Not Relational SQL or Not Only SQL. Lots of people think that NoSQL means there is no SQL, which is not true; the two sound the same but the meanings are totally different. NoSQL does use SQL, but it uses more than SQL to achieve its goal. As per Wikipedia's NoSQL Database Definition: "A NoSQL database provides a mechanism for storage and retrieval of data that uses looser consistency models than traditional relational databases."

    Why use NoSQL? A traditional relational database usually deals with predictable, structured data. As the world has moved forward with unstructured data, we often see the limitations of the traditional relational database in dealing with it. For example, nowadays we have data in the form of SMS messages, wave files, photos and video. It is a bit difficult to manage them using a traditional relational database. I often see people using a BLOB field to store such data. A BLOB can store the data, but when we have to retrieve or process it, the BLOB is extremely slow at handling unstructured data. A NoSQL database is the type of database that can handle the unstructured, unorganized and unpredictable data that our business needs. Along with the support for unstructured data, the other advantages of a NoSQL database are high performance and high availability.

    Eventual Consistency Note also that a NoSQL database may not provide 100% ACID (Atomicity, Consistency, Isolation, Durability) compliance. Though NoSQL databases do not support ACID, they provide eventual consistency. That means that over a long period of time all updates can be expected to propagate through the system and the data will become consistent.

    Taxonomy Taxonomy is the practice of classifying things or concepts according to a set of principles. The NoSQL taxonomy supports column stores, document stores, key-value stores, and graph databases. We will discuss the taxonomy in detail in later blog posts. Here are a few examples from each NoSQL category. Column: Hbase, Cassandra, Accumulo. Document: MongoDB, Couchbase, Raven. Key-value: Dynamo, Riak, Azure, Redis, Cache, GT.m. Graph: Neo4J, Allegro, Virtuoso, Bigdata. As of now there are over 150 NoSQL databases, and you can read everything about them in this single link.

    Tomorrow In tomorrow's blog post we will discuss the buzz word Hadoop. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
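    To make the "no fixed schema" idea concrete, here is a toy, in-memory sketch in Java; it is not a real NoSQL client, and the keys and field names are made up. Each record lives under a key and is free to carry its own shape, which is roughly the property a document or key-value store gives you:

        import java.util.HashMap;
        import java.util.Map;

        // Conceptual sketch only: two "user" records in a document-style store.
        // Unlike relational rows, the two documents do not share a fixed schema.
        public class DocumentStoreSketch {
            public static void main(String[] args) {
                Map<String, Map<String, Object>> store = new HashMap<>();

                Map<String, Object> user1 = new HashMap<>();
                user1.put("name", "Asha");
                user1.put("email", "asha@example.com");

                Map<String, Object> user2 = new HashMap<>();
                user2.put("name", "Bruno");
                user2.put("photos", new String[] {"beach.jpg", "hike.jpg"}); // field user1 doesn't have

                store.put("user:1", user1);
                store.put("user:2", user2);

                // Retrieval is by key; there is no fixed schema to violate.
                System.out.println(store.get("user:2").get("name"));
            }
        }

    A real document or key-value database adds persistence, indexing, replication and the eventual-consistency behaviour described above on top of this basic key-to-document idea.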

    Read the article

< Previous Page | 391 392 393 394 395 396 397 398 399 400 401 402  | Next Page >