Search Results

Search found 5380 results on 216 pages for 'primary'.


  • The Windows Azure Software Development Kit (SDK) and the Windows Azure Training Kit (WATK)

    - by BuckWoody
    Windows Azure is a platform that allows you to write software, run software, or use software that we've already written. We provide lots of resources to help you do that - many can be found right here in this blog series. There are two primary resources you can use, and it's important to understand what they are and what they do.

    The Windows Azure Software Development Kit (SDK)

    Actually, this isn't one resource. We have SDKs for multiple development environments, such as Visual Studio and Eclipse, along with SDKs for iOS, Android and other environments. Windows Azure is a "back end", so almost any technology or front-end system can use it to solve a problem. The SDKs are primarily for development. In the case of Visual Studio, you'll get a runtime environment for Windows Azure which allows you to develop, test and even run code entirely locally - you do not have to be connected to Windows Azure at all until you're ready to deploy. You'll also get a few samples and code blocks, along with all of the libraries you need to code with Windows Azure in .NET, PHP, Ruby, Java and more. The SDK is updated frequently, so check this location to find the latest for your environment and language - just click the bar that corresponds to what you want: http://www.windowsazure.com/en-us/develop/downloads/

    The Windows Azure Training Kit (WATK)

    Whether you're writing code, using Windows Azure Virtual Machines (VMs) or working with Hadoop, you can use the WATK to get examples, code, PowerShell scripts, PowerPoint decks, training videos and much more. This should be your second download after the SDK. It is all of the training you need to get started, and even beyond. The WATK is updated frequently - you can find the latest one here: http://www.windowsazure.com/en-us/develop/net/other-resources/training-kit/

    There are many other resources - again, check the http://windowsazure.com site, the community newsletter (which introduces the latest features), and my blog for more.

    Read the article

  • DTrace for Oracle Linux news: new beta release and conference appearances

    - by Lenz Grimmer
    A new set of RPM packages of our port of DTrace for Linux has just been published on the Unbreakable Linux Network. This is another beta release of our ongoing development effort to bring the DTrace framework to Linux. This release includes the following changes: The packages are now based on the final public release of the Unbreakable Enterprise Kernel Release 2 (2.6.39). The previous beta drop was based on a development version of the 2.6.39 kernel; there is no new functionality specific to DTrace in this release. The primary goal was to get the code base in sync with the released kernel version. Based on the feedback we received from some users on how their applications interact with DTrace, libdtrace is now a shared library. However, the API/ABI is not fully stabilized yet and may be subject to change. As a result of the ongoing QA testing, some test cases were reorganized into their own subdirectories, which allows running the test suite in a more fine-grained manner. As a reminder, we have a dedicated Forum for DTrace on Linux to discuss your experiences with this release. This week, the Linux DTrace team also attended the second dtrace.conf in San Francisco to talk about their work. The sessions were streamed live and recordings are also available, including Oracle's Kris Van Hees' talk. We would like to thank the dtrace.conf organizers for the speaking opportunity and for organizing this event! This Wednesday (April 4th), Kris and Elena Zannoni also spoke on this topic at the Linux Foundation Collaboration Summit 2012 in San Francisco, CA. The slides are now available for download (PDF).

    Read the article

  • Python or Ruby for freelance?

    - by Sophia
    Hello, I'm Sophia. I have an interest in self-learning either Python or Ruby. The primary reason for my interest is to make my life more stable by having freelance work = $. It seems that programming offers a way for me to escape my condition of poverty (I'm on the edge of homelessness right now) while at the same time making it possible for me to go to uni. I intend on being a math/philosophy major. I have messed with Python a little bit in the past, but it didn't click super well. The people who say I should choose Python say as much because it is considered a good first language/teaching language, and that it is general-purpose. The people who say I should choose Ruby point out that I'm a very right-brained thinker, and having multiple ways to do something will make it much easier for me to write good code. So, basically, I'm starting this thread as a dialog with people who know more than I do, as an attempt to make the decision. :-) I've thought about asking this on Stack Overflow, but they're much more strict about closing threads than here, and I'm sort of worried my thread will be closed. :/ TL;DR Python or Ruby for freelance work opportunities ($) as a first language? Additional question (if anyone cares to answer): I have a personal feeling that if I devote myself to learning, I'd be worth hiring for a project in about 8 weeks of work. I base this on a conservative estimate of my intellectual capacities, as well as on my motivation to improve my life. Is my estimate necessarily inaccurate? Random tidbit: I'm in Portland, OR. I'll answer questions that are asked of me, if I can help the accuracy and insight contained within the dialog.

    Read the article

  • Enterprise Architecture IS (should not be) Arbitrary

    - by pat.shepherd
    I took a look at a blog entry today by Jordan Braunstein where he comments on another blog entry titled "Yes, Enterprise Architecture is Relative BUT it is not Arbitrary." The blog makes some good points such as the following: Lock 10 architects in 10 separate rooms; provide them all an identical copy of the same business, technical, process, and system requirements; have them design an architecture under the same rules and perspectives; and I guarantee your result will be 10 different architectures of varying degrees. SOA Today: Enterprise Architecture IS Arbitrary Agreed, to a degree…but less so if all 10 truly followed one of the widely accepted EA frameworks. My thinking is that EA frameworks all focus on getting the business goals/vision locked down first as the primary drivers for decisions made lower down the architecture stack. Many people I talk to know about frameworks such as TOGAF, FEA, etc. but seldom apply the tenets to the architecture at hand. We all seem to want to get right into the Visio diagrams and boxes and arrows and connecting protocols and implementation details and lions and tigers and bears (Oh, my!) too early. If done properly, the Business, Application and Information architectures are nailed down BEFORE any technological direction (SOA or otherwise) is set. Those 3 layers and Governance (people and processes), IMHO, are layers that should not vary much, as they have everything to do with understanding the business -- from which technological conclusions can later be drawn. I really like what he went on to say later in the post about the fact that architecture attempts to remove the amount of variance between the 10 different architects' work. That is the real heart of what EA is about: REMOVING THE ARBITRARINESS.

    Read the article

  • Unleash AutoVue on Your Unmanaged Data

    - by [email protected]
    Over the years, I've spoken to hundreds of customers who use AutoVue to collaborate on their "managed" data stored in content management systems, product lifecycle management systems, etc. via our many integrations. Through these conversations I've also learned a harsh reality - we will never fully move away from unmanaged data (desktops, file servers, emails, etc). If you use AutoVue today you already know that even if your primary use is viewing content stored in a content management system, you can still open files stored locally on your computer. But did you know that AutoVue actually has - built-in - a great solution for viewing, printing and redlining your data stored on file servers? Using the 'Server protocol' you can point AutoVue directly to a top-level location on any networked file server and provide your users with a link or shortcut to access an interface similar to the sample page shown below. Many customers link to pages just like this one from their internal company intranets. Through this webpage, users can easily search and browse through file server data with a 'click-and-view' interface to find the specific image, document, drawing or model they're looking for. Any markups created on a document will be accessible to everyone else viewing that document and of course real-time collaboration is supported as well. Customers on maintenance can consult the AutoVue Admin guide or My Oracle Support Doc ID 753018.1 for an introduction to the server protocol. Contact your local AutoVue Solutions Consultant for help setting up the sample shown above.

    Read the article

  • AJAX Control Toolkit - Globalization (Language)

    - by Guilherme Cardoso
    For those who use AJAX Control Toolkit controls that present language-dependent data (the CalendarExtender with the names of the months and weeks, for example), you can change the language to be presented in a simple way. In Web.config, let's define the primary culture as follows:

        <system.web>
          <globalization uiCulture="pt-pt" culture="pt-pt" />
          ...
        </system.web>

    In this example I'm using Portuguese. To finish, it is necessary to change our ScriptManager - be it the ToolkitScriptManager from the AJAX Control Toolkit or the ScriptManager belonging to the .NET framework - by setting the EnableScriptGlobalization and EnableScriptLocalization properties to true:

        <cc1:ToolkitScriptManager ID="ToolkitScriptManager1" runat="server" EnableScriptGlobalization="true" EnableScriptLocalization="true">
        </cc1:ToolkitScriptManager>

    or

        <asp:ScriptManager ID="ScriptManager1" runat="server" EnableScriptGlobalization="true" EnableScriptLocalization="true">
        </asp:ScriptManager>

    This way the AJAX Control Toolkit controls will always use the Portuguese language. It is important that the AJAX Control Toolkit is kept updated to avoid missing or erroneous translations; I had such an error in the ModalPopup until I updated to the latest version, and now all the controls that I have used are translated correctly.

    Read the article

  • Blogging After The Blog Boom

    - by Tim Murphy
    I have been blogging on Geeks With Blogs since 2005, and on other blogging sites before that. In this age of Twitter, Facebook and G+ it feels like we are in the post-blog age, and yet here I continue. There are several reasons for this. The first is that I still find it to be the best place for self-publishing long-form thought that won't fit well on Twitter or Facebook. Google+ allows for this type of content, but it suffers from the same scroll factor as the other social media platforms: if you aren't looking at the right moment you miss it. On a blog I can put complete thoughts with examples, and people can find what they want via keywords or a search engine. The second reason I blog is to have a place to put information I want to be able to reference back to later. Although I use OneNote, which is now accessible everywhere, the blog gives me somewhere to refer co-workers and clients when I have solutions for problems I have previously solved. I know that other people use their blog as a resume builder, but that hasn't been one of my primary concerns. Don't get me wrong: opportunities do come up because you put out well-thought-out, topical material. That just isn't one of my top motivators. I don't always find the time to blog, or even have anything to say lately, but I will continue to produce content for myself and others to learn from and hopefully enjoy.

    Read the article

  • How to open a VirtualBox (.VDI) Virtual Machine

    - by [email protected]
    How to open a .VDI Virtual Machine. Sometimes someone shares a virtual machine with the .VDI extension with us, and we may wonder how to open it and what to do with it. Well, the answer is: it is a VirtualBox virtual machine. If you have not downloaded VirtualBox yet, you can do this easily - just follow this post: http://listeningoracle.blogspot.com/2010/04/que-es-virtualbox.html or http://oracleoforacle.wordpress.com/2010/04/14/ques-es-virtualbox/ Now, with VirtualBox installed, open it and proceed with the following:
    1. Open the Virtual Media Manager.
    2. Click on Actions > Add, select the .VDI file, and click "Ok".
    3. A new virtual machine will be displayed (in this case, an OEL5 32GB virtual machine is available).
    4. This step is important: once you have opened the settings, under the General option click the advanced settings. Here you must change the default directory for saving your snapshots; my recommendation is to set it to the same directory where the .VDI file is. Otherwise you can end up with the virtual machine and its snapshots in different paths.
    5. Now click on System, and proceed to assign the correct memory and define the processors for the virtual machine. Note: enable "Enable IO APIC" if you are planning to assign more than one CPU to the virtual machine.
    6. Associate the storage disk to the virtual machine. The disk must be selected as IDE Primary Master.
    7. You can verify the other options, but with these changes you will be able to start the VM. Note: sometimes the VM owner may share some instructions; if so, follow them.
    8. Click Ok, push the Start button, and enjoy your virtual machine.

    Read the article

  • Updating Banshee to 2.4

    - by Lucasguy11
    I have Banshee 2.2.1 with Ubuntu 11.10. I have been trying to update Banshee to 2.4 (released yesterday) but it just isn't working. I have been using sudo add-apt-repository ppa:banshee-team/ppa in the terminal, from the Banshee.fm website, but after running it the terminal says this:

        sudo add-apt-repository ppa:banshee-team/ppa
        You are about to add the following PPA to your system:
        PPA for Banshee Team
        This PPA contains the latest stable debs of Banshee for Ubuntu. To install Banshee, you must first enable the PPA on your system:
        1. Open Software Sources (System->Administration->Software Sources)
        2. Navigate to the "Third Party Sources" tab.
        3. Click "Add"
        4. Enter the APT line below that corresponds to your Ubuntu version that starts with "deb".
        5. Click "Add Source"
        6. Click "Close"
        7. It will prompt you to reload your software cache. Click "Reload".
        8. Now install the package "banshee" from Synaptic, or using the command below:
        sudo apt-get install banshee
        For those who wish to compile from trunk, add the deb-src line and then run "sudo apt-get build-dep" to install all required dependencies before starting to compile. Unstable (versions which have odd minor version numbers) debs of Banshee can be found here: https://launchpad.net/~banshee-team/+archive/banshee-unstable
        More info: https://launchpad.net/~banshee-team/+archive/ppa
        Press [ENTER] to continue or ctrl-c to cancel adding it
        Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /tmp/tmp.OPAjxemDQr --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver hkp://keyserver.ubuntu.com:80/ --recv 9D2C2E0A3C88DD807EC787D74874D3686E80C6B7
        gpg: requesting key 6E80C6B7 from hkp server keyserver.ubuntu.com
        gpg: key 6E80C6B7: "Launchpad PPA for Banshee Team" not changed
        gpg: Total number processed: 1
        gpg: unchanged: 1

    I believe I have the PPA, but I'm not sure. I need a step-by-step process to get this; I've been trying to figure it out for quite a while now...

    Read the article

  • DNS: forward part of a managed domain to one host, but sub domain services to another (Google Apps)

    - by Paul Zee
    I was going to post this as a comment against DNS: Forward domain to another host, but I don't seem able to do that. I'm in a similar situation. I have a domain registered/managed by enom, except with the slight twist that the domain was originally registered with enom through a Google Apps account creation. The domain currently supports a Google Apps site/account. I now want to direct the bare primary domain and www entries to a hosting provider for the website component, but leave the Google Apps setup intact for its services such as calendar, mail etc. For now, I'm leaving the domain managed by enom. Also note that when I registered my account with the hosting provider, I gave the same domain name as the existing domain (e.g. example.com), so at their end I'm working with the same domain name in cPanel, etc. In my case, the existing enom DNS entries don't have an A record for www.example.com or the bare example.com domain. Instead, there are 4 x @ records with the Google Apps IP address, 2 x TXT records with what I assume are Google Apps site verification strings/tokens, and a bunch of CNAME records for the various features of Google Apps (mail, calendar, docs, sites, etc). So, my questions: How do I point the www.example.com and example.com DNS entries at enom to my web site hosting provider, while leaving the domain managed by enom and the Google Apps services working as they are now (with the obvious exception of Google Sites)? How do I set up the example.com mail-related DNS records (MX, etc) at the web site hosting provider, so that outbound email to [email protected] gets correctly sent to the Google Apps mail account, and doesn't get trapped inside a pseudo-domain within the hosting provider's servers?

    Read the article

  • How to Run Pam Face Authentication

    - by Supriyo Banerjee
    I am using Ubuntu 11.10. I went to the following URL to download the software 'Pam Face Authentication': http://ppa.launchpad.net/antonio.chiurazzi/ppa/ubuntu/pool/main/p/pam-face-authentication/ and downloaded the version for Natty Narwhal. I installed the software using the following commands:

        sudo apt-get install build-essential cmake qt4-qmake libx11-dev libcv-dev libcvaux-dev libhighgui2.1 libhighgui-dev libqt4-dev libpam0g-dev checkinstall
        cd /tmp && wget http://pam-face-authentication.googlecode.com/files/pam-face-authentication-0.3.tar.gz
        sudo add-apt-repository ppa:antonio.chiurazzi
        sudo apt-get update
        sudo apt-get install pam-face-authentication
        cat << EOF | sudo tee /usr/share/pam-configs/face_authentication /dev/null
        Name: face_authentication profile
        Default: yes
        Priority: 900
        Auth-Type: Primary
        Auth: [success=end default=ignore] pam_face_authentication.so enableX
        EOF
        sudo pam-auth-update --package face_authentication

    The software installed and I can run the qt-facetrainer. But the problem is that when I restarted my system, I saw the default login screen appearing where I should put my password to log in. The webcam is not starting at all, and I cannot log in with my face, which means I think that the PAM face authentication programme is not starting at all. Please let me know how I can log in with my face using the PAM face authentication programme.

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 3 (sys.dm_exec_connections)

    - by Tamarick Hill
    The third DMV we will review is the sys.dm_exec_connections DMV. This DMV is server-scoped and displays information about each and every current connection on your SQL Server instance. Let's take a look at some information that this DMV returns.

        SELECT * FROM sys.dm_exec_connections

    After reviewing this DMV, in my opinion, it does not return a whole lot of useful information from a monitoring or troubleshooting standpoint. The primary use case I have for this DMV is when I need to get a quick count of how many connections I have on one of my SQL Server boxes. For this purpose a quick SELECT COUNT(*) satisfies my need. However, for those who need it, there is other information such as what type of authentication a specific connection is using, network packet size, and client/local TCP ports being used. This information can come in handy for specific scenarios, but you probably won't need it very much for your day to day monitoring/troubleshooting needs. However, this is still an important DMV that you should be aware of in the event that you need it. For more information on this DMV, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms181509.aspx
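
    As a quick, minimal sketch of the kind of queries described above (the columns used here - auth_scheme, net_packet_size, client_net_address, local_tcp_port - are among the documented columns of this DMV):

        -- quick count of current connections on the instance
        SELECT COUNT(*) AS connection_count
        FROM sys.dm_exec_connections;

        -- authentication scheme and network details per connection
        SELECT session_id,
               auth_scheme,        -- e.g. SQL, NTLM, KERBEROS
               net_packet_size,    -- network packet size in bytes
               client_net_address, -- client IP address
               local_tcp_port      -- local TCP port used by the connection
        FROM sys.dm_exec_connections;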

    Read the article

  • NEW 2-Day Instructor Led Course on Oracle Data Mining Now Available!

    - by chberger
    A NEW 2-day instructor-led course on Oracle Data Mining has been developed for customers and anyone wanting to learn more about data mining, predictive analytics and knowledge discovery inside the Oracle Database. Course objectives: Explain basic data mining concepts and describe the benefits of predictive analysis; understand primary data mining tasks, and describe the key steps of a data mining process; use the Oracle Data Miner to build, evaluate, and apply multiple data mining models; use Oracle Data Mining's predictions and insights to address many kinds of business problems, including: predict individual behavior, predict values, find co-occurring events; learn how to deploy data mining results for real-time access by end-users. Five reasons why you should attend this 2-day Oracle Data Mining course from Oracle University. With Oracle Data Mining, a component of the Oracle Advanced Analytics Option, you will learn to gain insight and foresight to: Go beyond simple BI and dashboards about the past - this course will teach you about "data mining" and "predictive analytics", analytical techniques that can provide huge competitive advantage; take advantage of your data and investment in Oracle technology; leverage all the data in your data warehouse - customer data, service data, sales data, customer comments and other unstructured data, point of sale (POS) data - to build and deploy predictive models throughout the enterprise; learn how to explore and understand your data and find patterns and relationships that were previously hidden; focus on solving strategic challenges to the business, for example, targeting "best customers" with the right offer, identifying product bundles, detecting anomalies and potential fraud, finding natural customer segments and gaining customer insight.
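
    As an illustration of the last objective - deploying predictions for real-time access - Oracle Data Mining models can be applied directly in SQL. A minimal sketch, assuming a classification model named churn_model has already been built (for example with Oracle Data Miner) against a customers table; both names are hypothetical:

        -- score each customer with the (hypothetical) churn_model classification model
        SELECT cust_id,
               PREDICTION(churn_model USING *)             AS predicted_churn,
               PREDICTION_PROBABILITY(churn_model USING *) AS churn_probability
        FROM customers;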

    Read the article

  • Why is HTML/Javascript minification beneficial

    - by Channel72
    Why is HTML/Javascript minification beneficial when the HTTP protocol already supports gzip data compression? I realize that Javascript/HTML minification has the potential to significantly reduce the size of Javascript/HTML files by removing unnecessary whitespace, and perhaps renaming variables to a few letters each, but doesn't the DEFLATE algorithm used by gzip do especially well when there are many repeated characters (e.g. lots of whitespace)? I realize that some Javascript minification tools do more than just reduce size. Google's Closure Compiler, for example, also tries to improve code performance by inlining functions and doing other analyses. But the primary purpose of Javascript minification is usually to reduce file size. I also realize there are other reasons you might want to minify aside from performance, such as code obfuscation. But again, that reason is not usually emphasized as much as performance gain and file size reduction. For example, Closure Compiler is not advertised as an obfuscation tool, but as a code size reducer and download-speed enhancer. So, how much performance do you really gain from Javascript/HTML minification when you're already significantly reducing file size with gzip compression?

    Read the article

  • Ubuntu: The Movie

    - by CYREX
    Since Ubuntu is the most popular distribution and has made a lot of changes in many places around the globe and in different industries - to the point where even people who do not know what Linux is know what Ubuntu is (go figure?) - there might be a movie coming someday (like The Social Network for Facebook or Revolution OS for Linux/Red Hat), and I wanted to know how it all came to be from the actual players in the show. UBUNTU: The Movie. Since I have seen several of the primary characters of the movie here, this might be a good place to start on how it all came to be. Not in the traditional Wikipedia way or the Ubuntu help section, but in terms of what the actual developers have in mind on how it all went down to the point of having a huge amount of users and an incredible level of sophistication in the forums, help sections, installers, etc. This is just to have the KNOW-HOW before the actual movie makes it out some day in the future. As a fan of Ubuntu this is a MUST KNOW! ;) Hope I made some people happy and some others shy hehe.

    Read the article

  • Problems after bumblebee installation

    - by Samuel
    I tried to install Bumblebee on Ubuntu 12.04 LTS by following the steps on the Ubuntu wiki site. But when I used this command: sudo add-apt-repository ppa:bumblebee/stable && sudo apt-get update this output came out:

        Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /tmp/tmp.q0zzLiXVT3 --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver hkp://keyserver.ubuntu.com:80/ --recv 46C0364A882F14F899448FFCB22A95F88110A93A
        gpg: requesting key 8110A93A from hkp server keyserver.ubuntu.com
        gpg: key 8110A93A: "Launchpad PPA for Bumlebee Project" not changed
        gpg: Total number processed: 1
        gpg: unchanged: 1
        E: Type 'ain' is not known on line 3 in source list /etc/apt/sources.list.d/bumblebee-stable-precise.list
        E: The list of sources could not be read.

    There's also the same error message when I try to run the update center: 'E: Type 'ain' is not known on line 3 in source list /etc/apt/sources.list.d/bumblebee-stable-precise.list, E: The list of sources could not be read., E: The package lists or status file could not be parsed or opened.' I don't know what to do since I'm a newbie at Linux. Thanks in advance, Samuel.

    Read the article

  • Other games that employ mechanics like the game "Diplomacy"

    - by Kevin Peno
    I'm doing a little bit of research and I'm hoping you can help me track down any games, other than Diplomacy (online version here), that employ all or some of the mechanics in Diplomacy (rules, short form). Examples of mechanics I'm looking for: Simultaneous orders given prior to execution of orders - in Diplomacy, players "write down" their moves and execute them "at the same time". Support, in terms of helping an attacker or defender "take" a territory - in Diplomacy, no one unit is stronger than another; you need to combine the strength of multiple units to attack other territories. Rules for how move conflicts are resolved - for example, two units move into a space, but only one is allowed; what happens? I may add to this list later, but these are the primary things I'm looking for. If you need clarification on anything just let me know. Note: I tried asking this on GamingSE, but it was shot down, so I am unsure where else I could post this. Since I am researching this for game development purposes, I assume this post is on topic. Please let me know if this is not the case. Please also feel free to re-categorize this. Thanks!

    Read the article

  • Change Data Capture

    - by Ricardo Peres
    There's a hidden gem in SQL Server 2008: Change Data Capture (CDC). Using CDC we get full audit capabilities with absolutely no implementation code: we can see all changes made to a specific table, including the old and new values! You can only use CDC in SQL Server 2008 Standard or Enterprise; Express edition is not supported. Here are the steps you need to take - just remember SQL Agent must be running:

        use SomeDatabase;

        -- first create a table
        CREATE TABLE Author
        (
            ID INT NOT NULL PRIMARY KEY IDENTITY(1, 1),
            Name NVARCHAR(20) NOT NULL,
            EMail NVARCHAR(50) NOT NULL,
            Birthday DATE NOT NULL
        )

        -- enable CDC at the DB level
        EXEC sys.sp_cdc_enable_db

        -- check CDC is enabled for the current DB
        SELECT name, is_cdc_enabled FROM sys.databases WHERE name = 'SomeDatabase'

        -- enable CDC for table Author, all columns
        EXEC sys.sp_cdc_enable_table @source_schema = 'dbo', @source_name = 'Author', @role_name = null

        -- insert values into table Author
        -- (the original post listed a Username column here that the CREATE TABLE above does not define, so it is dropped; the date literal is quoted)
        INSERT INTO Author (Name, EMail, Birthday) VALUES ('Bla', 'bla@bla', '1990-10-10')

        -- check CDC data for table Author
        -- __$operation: 1 = DELETE, 2 = INSERT, 3 = BEFORE UPDATE, 4 = AFTER UPDATE
        -- __$start_lsn: operation timestamp
        SELECT * FROM cdc.dbo_author_CT

        -- update table Author
        UPDATE Author SET EMail = '[email protected]' WHERE Name = 'Bla'

        -- check CDC data for table Author
        SELECT * FROM cdc.dbo_author_CT

        -- delete from table Author
        DELETE FROM Author

        -- check CDC data for table Author
        SELECT * FROM cdc.dbo_author_CT

        -- disable CDC for table Author
        -- this removes all CDC data, so be careful
        EXEC sys.sp_cdc_disable_table @source_schema = 'dbo', @source_name = 'Author', @capture_instance = 'dbo_Author'

        -- disable CDC for the entire DB
        -- this removes all CDC data, so be careful
        EXEC sys.sp_cdc_disable_db
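
    Querying the change table directly works for a quick look, but changes can also be read through the table-valued function that CDC generates for each capture instance. A minimal sketch (cdc.fn_cdc_get_all_changes_dbo_Author is the function generated for the dbo_Author capture instance above; sys.fn_cdc_get_min_lsn and sys.fn_cdc_get_max_lsn are built-in):

        -- read all changes recorded for the dbo_Author capture instance
        DECLARE @from_lsn BINARY(10), @to_lsn BINARY(10);
        SET @from_lsn = sys.fn_cdc_get_min_lsn('dbo_Author'); -- first LSN available for this capture instance
        SET @to_lsn = sys.fn_cdc_get_max_lsn();               -- current maximum LSN in the database
        SELECT * FROM cdc.fn_cdc_get_all_changes_dbo_Author(@from_lsn, @to_lsn, N'all');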

    Read the article

  • Trying to format drive fails

    - by david
    Since I will be doing an internship for which I need to use Windows software, I have decided to ruin my day trying to remove my Ubuntu 12.04, install Win XP SP3 (since the DualBoot guide from Ubuntu suggests installing Windows first and then Ubuntu, because of problems with the bootloader if you do it the other way around) and then reinstall Ubuntu 12.04, since I would like to keep using it as my primary operating system, using WinXP exclusively for the internship. Other than that, I would like to have a partition for the data, which can be used by both Ubuntu and Windows. So now, I have used the disk utility run from an Ubuntu live CD to format my drive with a Master Boot Record (being conscious of the fact that this way I will lose all my data, which I have saved on an external drive before, and that my Ubuntu won't work anymore afterwards), creating partitions for Windows (NTFS), personal data (FAT, since as far as I know both Ubuntu and Windows can deal with this), a swap partition for Linux, and one partition for Ubuntu (ext4); trying to install Win XP from CD gives me a blue screen, which stops the setup and tells me to remove all recently installed drives and to run CHKDSK. So I thought that maybe Windows doesn't like pre-partitioned drives for its installation, and thus I need to re-format my hard drive in order to have a completely "new" drive, which I can then, during the Windows installation, partition in order to create the partitions I need. Trying to do this, though, the disk utility run from the live CD gives me this warning:

        Error creating partition table: helper exited with exit code 1: In part_create_partition_table: device_file=/dev/sda, scheme=0 got it got disk committed to disk BLKRRPART ioctl failed for /dev/sda: Device or resource busy

    I do not understand why it tells me that the hard drive is busy, because, as stated above, I am doing all this from a live CD. Thus, my questions are: How can I resolve the error given by the disk utility? Does it make sense to use four partitions in the way mentioned above? And if not, which partitions should I create? Can I, theoretically, partition my drive from an Ubuntu live CD in order to create the partitions I want, and then install first Windows and then Ubuntu? Thanks for any help, David

    Read the article

  • SQL SERVER – Contest – Summary of 5 Day and Additional Information

    - by pinaldave
    I am overwhelmed with the response to the contest we ran earlier this week. Every day we are giving away USD 198 worth of giveaways to readers in the USA and India. If you have not participated so far, I encourage you to participate today. Here are links to our 5-day contest. The winners of the contest will be announced on August 20th.
    Query Hint – Contest Win Joes 2 Pros Combo (USD 198) – Day 1 of 5
    Identity Fields – Contest Win Joes 2 Pros Combo (USD 198) – Day 2 of 5
    Clustered Index and Primary Key – Contest Win Joes 2 Pros Combo (USD 198) – Day 3 of 5
    Expanding Views – Contest Win Joes 2 Pros Combo (USD 198) – Day 4 of 5
    Understanding XML – Contest Win Joes 2 Pros Combo (USD 198) – Day 5 of 5
    Here are a few important notes related to the contest. A few people asked me what they should do if they have forgotten to mention their country in the response: please resubmit with the correct data; we will only consider the latest entry from one person. What if you are not from the USA or India? Participate in the Bonus Quiz: leave a comment for each of the questions above with your favorite article, and you may be eligible to win something cool. What if I am a winner of two contests out of the 5 contests? Well, in that case, we will send you one set of the Combo Kit and an Amazon Gift Card of USD 100 for the other contest which you won. Can I exchange my kit for other stuff? No; if you do not want the kit, give it to someone who needs it. Btw, I strongly suggest that you participate in the Bonus Quiz. There is something cool for everyone! Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Tutorial: Getting Started with the NoSQL JavaScript / Node.js API for MySQL Cluster

    - by Mat Keep
    Tutorial authored by Craig Russell and JD Duncan. The MySQL Cluster team are working on a new NoSQL JavaScript connector for MySQL. The objectives are simplicity and high performance for JavaScript users:
    - allows end-to-end JavaScript development, from the browser to the server and now to the world's most popular open source database
    - native "NoSQL" access to the storage layer without going first through SQL transformations and parsing
    Node.js is a complete web platform built around JavaScript designed to deliver millions of client connections on commodity hardware. With the MySQL NoSQL Connector for JavaScript, Node.js users can easily add data access and persistence to their web, cloud, social and mobile applications. While the initial implementation is designed to plug and play with Node.js, the actual implementation doesn't depend heavily on Node, potentially enabling wider platform support in the future.

    Implementation

    The architecture and user interface of this connector are very different from other MySQL connectors in a major way: it is an asynchronous interface that follows the event model built into Node.js. To make it as easy as possible, we decided to use a domain object model to store the data. This allows users to query data from the database and have a fully-instantiated object to work with, instead of having to deal with rows and columns of the database. The domain object model can have any user behavior that is desired, with the NoSQL connector providing the data from the database. To make it as fast as possible, we use a direct connection from the user's address space to the database. This approach means that no SQL (pun intended) is needed to get to the data, and no SQL server is between the user and the data. The connector is being developed to be extensible to multiple underlying database technologies, including direct, native access to both the MySQL Cluster "ndb" and InnoDB storage engines. The connector integrates the MySQL Cluster native API library directly within the Node.js platform itself, enabling developers to seamlessly couple their high performance, distributed applications with a high performance, distributed, persistence layer delivering 99.999% availability. The following sections take you through how to connect to MySQL, query the data, and how to get started.

    Connecting to the database

    A Session is the main user access path to the database. You can get a Session object directly from the connector using the openSession function:

        var nosql = require("mysql-js");
        var dbProperties = {
            "implementation" : "ndb",
            "database" : "test"
        };
        nosql.openSession(dbProperties, null, onSession);

    The openSession function calls back into the application upon creating a Session. The Session is then used to create, delete, update, and read objects.

    Reading data

    The Session can read data from the database in a number of ways. If you simply want the data from the database, you provide a table name and the key of the row that you want. For example, consider this schema:

        create table employee (
          id int not null primary key,
          name varchar(32),
          salary float
        ) ENGINE=ndbcluster;

    Since the primary key is a number, you can provide the key as a number to the find function. (Note: the excerpt declared these callbacks as "function onSession = function(...)", which is not valid JavaScript; var declarations are used below.)

        var onSession = function(err, session) {
          if (err) {
            console.log(err);
            // ... error handling
          }
          session.find('employee', 0, onData);
        };

        var onData = function(err, data) {
          if (err) {
            console.log(err);
            // ... error handling
          }
          console.log('Found: ', JSON.stringify(data));
          // ... use data in application
        };

    If you want to have the data stored in your own domain model, you tell the connector which table your domain model uses by specifying an annotation, and pass your domain model to the find function.

        var annotations = new nosql.Annotations();
        var Employee = function(id, name, salary) {
          this.id = id;
          this.name = name;
          this.salary = salary;
          this.giveRaise = function(percent) {
            // the excerpt had "this.salary *= percent", which would cut the salary instead of raising it
            this.salary *= 1 + percent;
          };
        };
        annotations.mapClass(Employee, {'table' : 'employee'});

        var onSession = function(err, session) {
          if (err) {
            console.log(err);
            // ... error handling
          }
          session.find(Employee, 0, onData);
        };

    Updating data

    You can update the emp instance in memory, but to make the raise persistent, you need to write it back to the database, using the update function.

        var onData = function(err, emp) {
          if (err) {
            console.log(err);
            // ... error handling
          }
          console.log('Found: ', JSON.stringify(emp));
          emp.giveRaise(0.12); // gee, thanks!
          session.update(emp); // oops, session is out of scope here
        };

    Using JavaScript can be tricky because it does not have the concept of block scope for variables. You can create a closure to handle these variables, or use a feature of the connector to remember your variables. The connector API takes a fixed number of parameters and returns a fixed number of result parameters to the callback function. But the connector will keep track of variables for you and return them to the callback. So in the above example, change the onSession function to remember the session variable, and you can refer to it in the onData function:

        var onSession = function(err, session) {
          if (err) {
            console.log(err);
            // ... error handling
          }
          session.find(Employee, 0, onData, session);
        };

        var onData = function(err, emp, session) {
          if (err) {
            console.log(err);
            // ... error handling
          }
          console.log('Found: ', JSON.stringify(emp));
          emp.giveRaise(0.12); // gee, thanks!
          session.update(emp, onUpdate); // session is now in scope
        };

        var onUpdate = function(err, emp) {
          if (err) {
            console.log(err);
            // ... error handling
          }
        };

    Inserting data

    Inserting data requires a mapped JavaScript user function (constructor) and a session. Create a variable and persist it:

        var onSession = function(err, session) {
          var data = new Employee(999, 'Mat Keep', 20000000);
          session.persist(data, onInsert);
        };

    Deleting data

    To remove data from the database, use the session remove function. You use an instance of the domain object to identify the row you want to remove. Only the key field is relevant.

        var onSession = function(err, session) {
          var key = new Employee(999);
          // the excerpt passed Employee here; the key instance is what identifies the row to remove
          session.remove(key, onDelete);
        };

    More extensive queries

    We are working on the implementation of more extensive queries along the lines of the criteria query API. Stay tuned.

    How to evaluate

    The MySQL Connector for JavaScript is available for download from labs.mysql.com. Select the build: MySQL-Cluster-NoSQL-Connector-for-Node-js. You can also clone the project on GitHub. Since it is still early in development, feedback is especially valuable (so don't hesitate to leave comments on this blog, or head to the MySQL Cluster forum). Try it out and see how easy (and fast) it is to integrate MySQL Cluster into your Node.js platforms. You can learn more about other previewed functionality of MySQL Cluster 7.3 here.

    Read the article

  • AMR's 2010 Supply Chain Top 25 Report: Early Predictions

    - by [email protected]
    On April 6th, AMR's Debra Hoffman and Kevin O'Marah presented their annual 'Top 25 Supply Chain' predictions. For supply chain professionals it was a 'must-hear' event, especially with the new focus on both operational excellence and innovation excellence. Most people think of R&D as the primary driver for innovation, but in today's 'new normal' firms need to constantly review, evaluate and update their workflow procedures and business processes to maintain a sharp blade on the leading edge. Having the right tools in place to be able to monitor supply chain effectiveness becomes paramount for firms as they compete in the global marketplace. Organizations need user-friendly, role-based dashboards with early alerts to contextualize activities and post the best options for managers to make better and more informed decisions. The 2009 winners were: 1. Apple, 2. Dell, 3. P&G, 4. IBM, 5. Cisco, 6. Nokia, 7. Walmart, 8. Samsung, 9. PepsiCo, 10. Toyota, 11. Schlumberger, 12. J&J, 13. Coke, 14. Nike, 15. Tesco, 16. Disney, 17. HP, 18. TI, 19. Lockheed Martin, 20. Colgate, 21. Best Buy, 22. Unilever, 23. Publix, 24. Sony Ericsson, 25. Intel.

    Read the article

  • UI message passing programming paradigm

    - by Ronald Wildenberg
    I recently (about two months ago) read an article that explained a user interface paradigm whose name I can't remember, and I also can't find the article anymore. The paradigm allows for decoupling the user interface and backend through message passing (via some queueing implementation). So each user action results in a message being passed to the backend. The user interface is then updated to inform the user that his request is being processed. The assumption is that a user interface is stale by definition. When you read data from some store into memory, it is stale because another transaction may be updating the same data already. If you assume this, it makes no sense to try to represent the 'current' database state in the user interface (so the delay introduced by passing messages to a backend doesn't matter). If I remember correctly, the article also mentioned a read-optimized data store for rendering the user interface. The article assumed a high-traffic web application. A primary reason for using a message queue communicating with the backend is performance: returning control to the user as soon as possible. Updating backend stores is handled by another process, and eventually these changes also become visible to the user. I hope I have explained accurately enough what I'm looking for. If someone can provide some pointers to what I'm looking for, thanks very much in advance.

    Read the article

  • when to introduce an application services tier in an n-tier application

    - by user20358
    I am developing a web-based application whose primary objective is to fetch data from the database, display it on the UI, take in user inputs and write them back to the database. The application is not going to be doing any industrial-strength algorithm crunching, but it will be receiving a very high number of hits at peak times (described below), which will change through the day. The layers are your typical Presentation, Business, Data. The data layer is taken care of by the database server. The business layer will contain the DAL component to access the database server over TCP. The choices I have to separate these layers into tiers are: 1. The presentation and business layers can be kept on the same tier. 2. The presentation layer on a separate tier by itself and the business layer on a separate tier by itself. In the case of choice 2, the business layer will be accessed by the presentation layer using a WCF service, either over HTTP or TCP. I don't see any heavy processing being done in the business layer, so I am leaning towards option 1 above. I also feel that, for the same reason, adding a new tier would only introduce network latency. However, in terms of scalability, in case I need to scale up or scale out, which is the better way to go? This application will need to be able to support up to 6 million users an hour. There will be a reasonable amount of data in each user session, storing the user's preferences and other details. I will be using page-level caching as well.

    Read the article
