Search Results

Search found 22803 results on 913 pages for 'customer support sr'.


  • Bridging Two Worlds: Big Data and Enterprise Data

    - by Dain C. Hansen
    The big data world is all the vogue in today’s IT conversations. It’s a world of volume, velocity, variety – tantalizing us with its untapped potential. It’s a world of transformational, game-changing technologies that have already begun to alter the information management landscape. One of the reasons that big data is so compelling is that it’s a universal challenge that impacts every one of us. Whether it is healthcare, financial, manufacturing, government, or retail – big data presents a pressing problem for many industries: how can so much information be processed so quickly to deliver the ‘bigger’ picture? With big data we’re tapping into new information that didn’t exist before: social data, weblogs, sensor data, complex content, and more. What also makes big data revolutionary is that it turns traditional information architecture on its head, putting into question commonly accepted notions of where and how data should be aggregated, processed, analyzed, and stored. This is where Hadoop and NoSQL come in – new technologies which solve new problems for managing unstructured data.

    And now for some worst practices that I'd recommend you please not follow:

    Worst Practice Lesson 1: Throw away everything that you already know about data management and data integration tools, and start completely over. One shouldn’t forget what’s already running in today’s IT. Today’s business analytics, data warehouses, business applications (ERP, CRM, SCM, HCM), and even many social, mobile, and cloud applications still rely almost exclusively on structured data – or what we’d like to call enterprise data. This dilemma is what today’s IT leaders are up against: what are the best ways to bridge enterprise data with big data? And what are the best strategies for dealing with the complexities of these two unique worlds?

    Worst Practice Lesson 2: Throw away all of your existing business applications … because they don’t run on big data yet. Bridging the two worlds of big data and enterprise data means considering solutions that are complete, based on emerging Hadoop technologies (as well as traditional ones), and poised for success through integrated design tools and integrated platforms that connect to your existing business applications and support real-time analytics. Leveraging these types of best practices translates to improved productivity, lowered TCO, IT optimization, and better business insights.

    Worst Practice Lesson 3: Separate out [and keep separate] your big data sandboxes from all the current enterprise IT systems. Don’t mix sand among playgrounds. We didn't tell you that you wouldn't get dirty doing this. Correlation between the two worlds is key. The real advantage to analyzing big data comes when you can correlate it with the existing data in your data warehouse or your current applications to make sense of the larger patterns.

    If you have not followed these worst practices 1-3, then you qualify for the first step of our journey: bridging the two worlds of enterprise data and big data. Over the next several weeks we’ll be discussing this topic, along with several others around big data as it relates to data integration. We welcome you to join the conversation by following us on Twitter at #BridgingBigData, or download our latest white paper and resource kit: Big Data and Enterprise Data: Bridging Two Worlds.

    Read the article

  • Problem with script substitution when running script

    - by tucaz
    Hi! I'm new to Linux so this probably should be an easy fix, but I cannot see it. I have a script, downloaded from official sources, that is used to install additional tools for F#, but it gives me a syntax error when running it. I tried to replace ( and ) with { and }, but that eventually led me to another error, so I think this is not the problem, since the script works for everybody else. I read some articles that say my bash version may not be the right one. I'm using Ubuntu 10.10 and here is the error:

        install-bonus.sh: 28: Syntax error: "(" unexpected (expecting "}")

    And these are lines 27, 28 and 29:

        {
          declare -a DIRS=("${!3}")
          FILE=$2

    And the full script:

        #! /bin/sh -e

        PREFIX=/usr
        BIN=$PREFIX/bin
        MAN=$PREFIX/share/man/man1/

        die() {
          echo "$1" >&2
          echo "Installation aborted." >&2
          exit 1
        }

        echo "This script will install additional material for F# including"
        echo "man pages, fsharpc and fsharpi scripts and Gtk# support for F#"
        echo "Interactive (root access needed)"
        echo ""

        # ------------------------------------------------------------------------------
        # Utility function that searches specified directories for a specified file
        # and if the file is not found, it asks user to provide a directory

        RESULT=""
        searchpaths()
        {
          declare -a DIRS=("${!3}")
          FILE=$2
          DIR=${DIRS[0]}
          for TRYDIR in ${DIRS[@]}
          do
            if [ -f $TRYDIR/$FILE ]
            then
              DIR=$TRYDIR
            fi
          done
          while [ ! -f $DIR/$FILE ]
          do
            echo "File '$FILE' was not found in any of ${DIRS[@]}. Please enter $1 installation directory:"
            read DIR
          done
          RESULT=$DIR
        }

        # ------------------------------------------------------------------------------
        # Locate F# installation directory - this is needed, because we want to
        # add environment variable with it, generate 'fsharpc' and 'fsharpi' and also
        # copy load-gtk.fsx to that directory
        # ------------------------------------------------------------------------------

        PATHS=( $1 /usr/lib/fsharp /usr/lib/shared/fsharp )
        searchpaths "F# installation" FSharp.Core.dll PATHS[@]
        FSHARPDIR=$RESULT
        echo "Successfully found F# installation directory."

        # ------------------------------------------------------------------------------
        # Check that we have everything we need
        # ------------------------------------------------------------------------------

        [ $(id -u) -eq 0 ] || die "Please run the script as root."
        which mono > /dev/null || die "mono not found in PATH."

        # ------------------------------------------------------------------------------
        # Make sure that all additional assemblies are in GAC
        # ------------------------------------------------------------------------------

        echo "Installing additional F# assemblies to the GAC"
        gacutil -i $FSHARPDIR/FSharp.Build.dll
        gacutil -i $FSHARPDIR/FSharp.Compiler.dll
        gacutil -i $FSHARPDIR/FSharp.Compiler.Interactive.Settings.dll
        gacutil -i $FSHARPDIR/FSharp.Compiler.Server.Shared.dll

        # ------------------------------------------------------------------------------
        # Install additional files
        # ------------------------------------------------------------------------------

        # Install man pages
        echo "Installing additional F# commands, scripts and man pages"
        mkdir -p $MAN
        cp *.1 $MAN

        # Export the FSHARP_COMPILER_BIN environment variable
        if [[ ! "$OSTYPE" =~ "darwin" ]]; then
          echo "export FSHARP_COMPILER_BIN=$FSHARPDIR" > fsharp.sh
          mv fsharp.sh /etc/profile.d/
        fi

        # Generate 'load-gtk.fsx' script for F# Interactive (ask user if we cannot find binaries)
        PATHS=( /usr/lib/mono/gtk-sharp-2.0 /usr/lib/cli/gtk-sharp-2.0 /Library/Frameworks/Mono.framework/Versions/2.8/lib/mono/gtk-sharp-2.0 )
        searchpaths "Gtk#" gtk-sharp.dll PATHS[@]
        GTKDIR=$RESULT
        echo "Successfully found Gtk# root directory."

        PATHS=( /usr/lib/mono/gtk-sharp-2.0 /usr/lib/cli/glib-sharp-2.0 /Library/Frameworks/Mono.framework/Versions/2.8/lib/mono/gtk-sharp-2.0 )
        searchpaths "Glib" glib-sharp.dll PATHS[@]
        GLIBDIR=$RESULT
        echo "Successfully found Glib# root directory."

        PATHS=( /usr/lib/mono/gtk-sharp-2.0 /usr/lib/cli/atk-sharp-2.0 /Library/Frameworks/Mono.framework/Versions/2.8/lib/mono/gtk-sharp-2.0 )
        searchpaths "Atk#" atk-sharp.dll PATHS[@]
        ATKDIR=$RESULT
        echo "Successfully found Atk# root directory."

        PATHS=( /usr/lib/mono/gtk-sharp-2.0 /usr/lib/cli/gdk-sharp-2.0 /Library/Frameworks/Mono.framework/Versions/2.8/lib/mono/gtk-sharp-2.0 )
        searchpaths "Gdk#" gdk-sharp.dll PATHS[@]
        GDKDIR=$RESULT
        echo "Successfully found Gdk# root directory."

        cp bonus/load-gtk.fsx load-gtk1.fsx
        sed "s,INSERTGTKPATH,$GTKDIR,g" load-gtk1.fsx > load-gtk2.fsx
        sed "s,INSERTGDKPATH,$GDKDIR,g" load-gtk2.fsx > load-gtk3.fsx
        sed "s,INSERTATKPATH,$ATKDIR,g" load-gtk3.fsx > load-gtk4.fsx
        sed "s,INSERTGLIBPATH,$GLIBDIR,g" load-gtk4.fsx > load-gtk.fsx
        rm load-gtk1.fsx
        rm load-gtk2.fsx
        rm load-gtk3.fsx
        rm load-gtk4.fsx
        mv load-gtk.fsx $FSHARPDIR/load-gtk.fsx

        # Generate 'fsharpc' and 'fsharpi' scripts (using the F# path)
        # 'fsharpi' automatically searches F# root directory (e.g. load-gtk)
        echo "#!/bin/sh" > fsharpc
        echo "exec mono $FSHARPDIR/fsc.exe --resident \"\$@\"" >> fsharpc
        chmod 755 fsharpc

        echo "#!/bin/sh" > fsharpi
        echo "exec mono $FSHARPDIR/fsi.exe -I:\"$FSHARPDIR\" \"\$@\"" >> fsharpi
        chmod 755 fsharpi

        mv fsharpc $BIN/fsharpc
        mv fsharpi $BIN/fsharpi

    Thanks a lot!

    Read the article

  • Jersey non blocking client

    - by Pavel Bucek
    Although Jersey already has support for making asynchronous requests, it is implemented in the standard blocking way – every asynchronous request is handled by one thread, and that thread is released only after the request is completely processed. That is OK for lots of cases, but imagine how that will work when you need to make lots of parallel requests. Of course you can limit the number of threads used for asynchronous requests (and it's really a wise thing to do – you do want to control your resources), but you'll get another, maybe not pleasant, consequence: processing time will obviously increase.

    There are a few projects trying to deal with that problem, commonly named async HTTP clients. I didn't want to "re-implement a wheel", so I decided to use AHC – the Async Http Client made by Jeanfrancois Arcand. There is also an interesting implementation from Apache – HttpAsyncClient – but it is still in "very early stages of development", and others weren't in similar or better shape than AHC.

    How does this work? Non-blocking clients allow users to make the same asynchronous requests as we can with the standard approach, but the implementation is different – threads are better utilized, and they don't spend most of their time in an idle state. Simply described: when you make a request (send it over the network), you are waiting for a reply from the other side. And there comes the main advantage of the non-blocking approach – it uses those threads for further work, like making other requests or processing responses. Idle time is minimized and your resources (threads) are far better used.

    Who should consider using this? Everyone who is making lots of asynchronous requests. I haven't done a proper benchmark yet, but some simple dumb tests are showing a huge improvement in cases where lots of concurrent asynchronous requests are made in a short period.

    Last but not least – this module is still experimental, so if you don't like something, or if you have ideas for improvements or any other feedback, feel free to comment on this blog post, send mail to [email protected], or contact me personally. All feedback is greatly appreciated!

    Maven dependency (will be present in the java.net maven 2 repo by the end of the day): http://download.java.net/maven/2/com/sun/jersey/experimental/jersey-non-blocking-client

        <dependency>
            <groupId>com.sun.jersey.experimental</groupId>
            <artifactId>jersey-non-blocking-client</artifactId>
            <version>1.9-SNAPSHOT</version>
        </dependency>

    Code snippet:

        ClientConfig cc = new DefaultNonBlockingClientConfig();
        cc.getProperties().put(NonBlockingClientConfig.PROPERTY_THREADPOOL_SIZE, 10); // default value, feel free to change
        Client c = NonBlockingClient.create(cc);
        AsyncWebResource awr = c.asyncResource("http://oracle.com");
        Future<ClientResponse> responseFuture = awr.get(ClientResponse.class);
        // or
        awr.get(new TypeListener<ClientResponse>(ClientResponse.class) {
            @Override
            public void onComplete(Future<ClientResponse> f) throws InterruptedException {
                ...
            }
        });

    Javadoc (temporary location, won't be updated): http://anise.cz/~paja/jersey-non-blocking-client/

    Read the article

  • New release of Microsoft All-In-One Code Framework is available for download - March 2011

    - by Jialiang
    A new release of Microsoft All-In-One Code Framework became available on March 8th. Download address: http://1code.codeplex.com/releases/view/62267#DownloadId=215627. You can download individual code samples or browse code samples grouped by technology in the updated code sample index. If this is the first time you have heard about the Microsoft All-In-One Code Framework, please read this Microsoft News Center article http://www.microsoft.com/presspass/features/2011/jan11/01-13codeframework.mspx, watch the introduction video on YouTube http://www.youtube.com/watch?v=cO5Li3APU58, or read the introduction on our homepage http://1code.codeplex.com/.

    New Silverlight code samples

    CSSLTreeViewCRUDDragDrop
    Download: http://1code.codeplex.com/releases/view/62253#DownloadId=215808
    The code sample was created by Amit Dey. It demonstrates a custom TreeView with added functionality for CRUD (Create, Read, Update, Delete) and drag-and-drop operations. A Silverlight TreeView control with CRUD and drag & drop is a frequently asked programming question in Silverlight forums, and many customers also requested this code sample in our code sample request service. We hope that this sample can reduce developers' efforts in handling this typical programming scenario. The following blog article introduces the sample in detail: http://blogs.msdn.com/b/codefx/archive/2011/02/15/silverlight-treeview-control-with-crud-and-drag-amp-drop.aspx.

    CSSL4FileDragDrop and VBSL4FileDragDrop
    Download: http://1code.codeplex.com/releases/view/62253#DownloadId=215809
    http://1code.codeplex.com/releases/view/62253#DownloadId=215810
    The code sample demonstrates the new drag & drop feature of Silverlight 4 by implementing dragging pictures from the local file system into a Silverlight application.

    Sometimes we want to change the SiteMapPath control's titles and paths according to query string values, and sometimes we want to create the SiteMapPath dynamically. This code sample shows how to achieve these goals by handling the SiteMap.SiteMapResolve event.

    CSASPNETEncryptAndDecryptConfiguration, VBASPNETEncryptAndDecryptConfiguration
    Download: http://1code.codeplex.com/releases/view/62253#DownloadId=215027
    http://1code.codeplex.com/releases/view/62253#DownloadId=215106
    In this sample, we encrypt and decrypt some sensitive information in the config file of a web application by using RSA asymmetric encryption. The project contains two snippets. The first demonstrates how to use RSACryptoServiceProvider to generate a public key and the corresponding private key and then encrypt/decrypt a string value on a page. The second shows how to use the RSA configuration provider to encrypt and decrypt a configuration section in the web.config of a web application; the sample shows the connectionStrings section both in plain text and encrypted. Note that if you store sensitive data in any of the following configuration sections, we cannot encrypt it by using a protected configuration provider: <processModel>, <runtime>, <mscorlib>, <startup>, <system.runtime.remoting>, <configProtectedData>, <satelliteassemblies>, <cryptographySettings>, <cryptoNameMapping>.
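    For the section-level scenario, the standard .NET configuration API is the usual route. The following is a minimal sketch, assuming the built-in RsaProtectedConfigurationProvider and an ASP.NET application rooted at "~"; it is illustrative only, not the sample's own code:

        using System.Configuration;
        using System.Web.Configuration;

        public static class ConfigProtector
        {
            public static void ProtectConnectionStrings()
            {
                Configuration config = WebConfigurationManager.OpenWebConfiguration("~");
                ConfigurationSection section = config.GetSection("connectionStrings");
                if (section != null && !section.SectionInformation.IsProtected)
                {
                    // Encrypts the section on disk; runtime reads decrypt transparently.
                    section.SectionInformation.ProtectSection("RsaProtectedConfigurationProvider");
                    config.Save();
                }
            }
        }

    Calling SectionInformation.UnprotectSection() followed by Save() reverses the operation.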
    CSASPNETFileUploadStatus
    Download: http://1code.codeplex.com/releases/view/62253#DownloadId=215028
    I believe ASP.NET programmers will like this sample, because in many cases we need customers to know the current status of uploading files, including the upload speed, completion percentage and so on. The uploading data can be retrieved in two places: the client side and the server side. On the client, for safety reasons, the file upload status cannot be obtained from JavaScript or server-side code, so under normal circumstances a COM or plug-in component such as Flash or Silverlight is needed. I do not like this approach, because the customer needs to install these components and we also need to learn another programming framework. On the server side, we can get the information through code, but the key question is how to report the results back to the client. In this sample, we combine a custom HttpModule and AJAX to illustrate how to analyze the HTTP protocol, break up the file request packets, customize the location of the server-side file cache, return the file upload status back to the client, and so on.

    CSASPNETHighlightCodeInPage, VBASPNETHighlightCodeInPage
    Download: http://1code.codeplex.com/releases/view/62253#DownloadId=215029
    http://1code.codeplex.com/releases/view/62253#DownloadId=215108
    This sample imitates a system that needs to display highlighted code in an ASP.NET page. As a matter of fact, sometimes we input code like C# or HTML in a web page, and we need the code to be highlighted for a better reading experience; it is easier to keep code in mind if it is highlighted. It is not difficult to highlight code in a web page by using the String.Replace method directly. That method returns a new string in which all occurrences of a specified string are replaced with another specified string. However, it may not be a good idea: it is not fast (in fact, it's pretty slow), and it is hard to highlight multiple keywords, or to give different types of keywords different colors, by using String.Replace directly. To handle this, the sample uses a hashtable variable to store the different languages and their related regular expressions with matching options, and defines the CSS styles used to highlight the code; the project then automatically wraps each matching piece of code with the style. A step-by-step guide:
    1. The HighlightCodePage.aspx page: choose a language in the dropdownlist control, paste the code into the textbox control, then click the HighLight button.
    2. Display the highlighted code: after the user clicks the HighLight button, the highlighted code is displayed on the right side of the page.
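    The hashtable-plus-regular-expression idea above can be sketched in a few lines of C#. This is an illustration only, with a hypothetical keyword list and CSS class name, not the sample's own code:

        using System.Text.RegularExpressions;

        public static class CodeHighlighter
        {
            // Hypothetical keyword set; the real sample keeps one pattern per
            // language in a hashtable, together with its matching options.
            private static readonly Regex CSharpKeywords = new Regex(
                @"\b(public|private|static|class|void|int|string|return)\b",
                RegexOptions.Compiled);

            public static string Highlight(string encodedSource)
            {
                // One pass over the text, instead of one String.Replace per keyword.
                return CSharpKeywords.Replace(encodedSource,
                    m => "<span class=\"keyword\">" + m.Value + "</span>");
            }
        }

    The input is assumed to be HTML-encoded already, so the injected span tags are the only markup in the output.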
    CSASPNETPreventMultipleWindows
    Download: http://1code.codeplex.com/releases/view/62253#DownloadId=215032
    This sample is a step-by-step guide illustrating how to detect and prevent multiple window or tab usage in web applications. It imitates a system that needs to prevent multiple windows or tabs in order to solve problems like shared sessions, duplicated logins, data concurrency, etc. In fact, there are many methods of achieving this goal; here we give a JavaScript-based solution. The sample shows how to use the window.name property to check the correct links and send other requests to an invalid page. It uses two user controls to distinguish between the base page and the target pages, so you only need to drag the appropriate control onto each web form page instead of writing repetitive code in every page; this keeps the coding light and makes the code convenient to modify.

    JSVirtualKeyboard
    Download: http://1code.codeplex.com/releases/view/62253#DownloadId=215093
    This All-In-One Framework sample is a step-by-step guide illustrating how to build a virtual keyboard in an HTML page. Sometimes we may need to offer a virtual keyboard to let users input something without their real keyboards. This scenario often occurs when users enter a password to access our site and we want to protect the password from certain kinds of back-door software, a key-logger for example; a virtual keyboard on the page is a good choice here. To create a virtual keyboard, we first add some buttons to the page; when the user clicks a button, the JavaScript function handling the onclick event inputs the appropriate character into the textbox. That is the simple logic of the feature. However, if we want a virtual keyboard that substitutes for the real keyboard completely, we need more advanced logic to handle keys like Caps-Lock and Shift, which is complex work to achieve.

    CSASPNETDataListImageGallery
    Download: http://1code.codeplex.com/releases/view/62261#DownloadId=215267
    This code sample demonstrates how to create an image gallery application by using the DataList control in ASP.NET. Image galleries are widely used on social networking sites, personal websites and e-business websites; for example, you may use an image gallery to show a library of personally uploaded images on a personal website, and slideshows are also a popular way to display images. The sample uses the DataList and ImageButton controls to create an image gallery with image navigation: you can click a thumbnail image in the DataList control to display a larger version of the image on the page. The sample reads the image paths from a certain directory into a FileInfo array, which is then used to populate a custom DataTable object that is bound to the DataList control. It also implements a custom paging system that displays five images horizontally on one page, using the following link buttons:
    • First
    • Previous
    • Next
    • Last
    Note: we recommend that you use this method to load no more than five images at a time. You can also set the SelectedIndex property of the DataList control to limit the number of thumbnail images that can be selected, and set the SelectedStyle property to indicate which image is selected.

    VBASPNETSearchEngine
    Download: http://1code.codeplex.com/releases/view/62253#DownloadId=215112
    This sample shows how to implement a simple search engine on an ASP.NET web site. It uses a LIKE condition in a SQL statement to search the database, then highlights the keywords in the search results by using regular expressions and JavaScript.
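    The LIKE query at the heart of such a search can be sketched as follows. This is an illustrative C# fragment with a hypothetical Articles table and columns; the actual sample is written in VB and uses its own schema:

        using System.Data.SqlClient;

        public static class SimpleSearch
        {
            public static void Search(string connectionString, string keyword)
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(
                    "SELECT Title, Body FROM Articles WHERE Body LIKE @pattern", conn))
                {
                    // Parameterize the pattern instead of concatenating it,
                    // so the keyword cannot inject SQL.
                    cmd.Parameters.AddWithValue("@pattern", "%" + keyword + "%");
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            System.Console.WriteLine(reader.GetString(0));
                        }
                    }
                }
            }
        }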
    New Windows General code samples

    CSCheckEXEType, VBCheckEXEType
    Downloads: http://1code.codeplex.com/releases/view/62253#DownloadId=215045
    http://1code.codeplex.com/releases/view/62253#DownloadId=215120
    The sample demonstrates how to check an executable file type. For a given executable file, we can get: (1) whether it is a console application; (2) whether it is a .NET application; (3) whether it is a 32-bit native application; (4) the full display name of a .NET application, e.g. System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, processorArchitecture=MSIL.

    New Internet Explorer code samples

    CSIEExplorerBar, VBIEExplorerBar
    Downloads: http://1code.codeplex.com/releases/view/62253#DownloadId=215060
    http://1code.codeplex.com/releases/view/62253#DownloadId=215133
    The sample demonstrates how to create and deploy an IE Explorer Bar that lists all the images in a web page.

    CSBrowserHelperObject, VBBrowserHelperObject
    Downloads: http://1code.codeplex.com/releases/view/62253#DownloadId=215044
    http://1code.codeplex.com/releases/view/62253#DownloadId=215119
    The sample demonstrates how to create and deploy a Browser Helper Object; the BHO in this sample is used to disable the context menu in IE.

    New Windows Workflow Foundation code samples

    CSWF4ActivitiesCorrelation
    Download: http://1code.codeplex.com/releases/view/62253#DownloadId=215085
    Consider two such workflow instances running side by side:

        start                     start
          |                         |
        Receive activity          Receive activity
          |                         |
        Receive2 activity         Receive2 activity

    A WCF request comes in to call the second Receive2 activity. Which instance should take care of the request? The answer is correlation. This sample shows you how to correlate two workflow services so they work together.

    New ASP.NET code samples

    CSASPNETBreadcrumbWithQueryString
    Download: http://1code.codeplex.com/releases/view/62253#DownloadId=215022

    Read the article

  • How to create (via installer script) a task that will install my bash script so it runs on DE startup?

    - by MountainX
    I've been reading for the last couple of hours about Upstart, .xinitrc, .xsessions, rc.local, /etc/init.d/, /etc/xdg/autostart, @reboot in crontab and so many other things that I'm totally confused! Here is my bash script. It should start/run after the desktop environment is started, and it should continue to run at all times until logout/shutdown. It should start again on reboot. Any time the DE is running, it should run.

        #!/bin/bash
        while true; do
            if [[ -s ~/.updateNotification.txt ]]; then
                read MSG < ~/.updateNotification.txt
                kdialog --title 'The software has been updated' --msgbox "$MSG"
                cat /dev/null > ~/.updateNotification.txt
            fi
            sleep 3600
        done
        exit 0

    I know zero about using Upstart, but I understand that Upstart is one way to handle this. I'll consider other approaches, but most of the things I've been reading about are too complex for me. Furthermore, I can't figure out which approach will meet my requirements (which I'll detail below). There are two steps in my question:

    1. How to automatically start the script above, as described above.
    2. How to "install" that Upstart task via a bash script (i.e., my "installer"). I assume (or hope) that step 2 is almost trivial once I understand step 1.

    I have to support all flavors of Ubuntu desktops, therefore the kdialog call above will be replaced. I'm considering easybashgui for this. (Or I could use zenity on GNOME DEs.) My requirements are:

    • The setup process (installation) must be done via a bash script. I cannot use the GUI method described in the Ubuntu doc AddingProgramToSessionStartup, for example. I must be able to script/automate the setup (installing) process using bash. Currently, it is as simple as having the bash installer script copy the above script into /home/$USER/.kde/Autostart/
    • The setup process must be universal across Ubuntu derivatives, including Unity, KDE and GNOME desktops. The same setup script (installer) should run on Linux Mint, Kubuntu, Xubuntu (basically any flavor of Ubuntu and major derivatives such as Linux Mint). For example, we cannot continue to put a script file in /home/$USER/.kde/Autostart/ because that exists only on KDE.
    • The above script should work on each of the limited flavors we use, hence our interest in using easybashgui instead of kdialog or zenity. See below.
    • The installed monitoring script should only be started after the desktop is started, since it will display a GUI message to the user if an update is found.
    • The monitoring script (above) should run without root privileges, of course. But the installer (bash script) can be run as root.

    I'm not a real developer or a sysadmin. This is a part-time volunteer thing for me, so it needs to be easy/simple. I can write bash scripts and I can program a little, but I know nothing about Upstart or systemd, for example. And, unfortunately, my job doesn't give me time to become an expert on init systems or much of anything else related to development and sysadmin, so I have to stick with simple solutions.

    The easybashgui version of the script might look like this:

        #!/bin/bash
        source easybashgui
        while true; do
            if [[ -s ~/.updateNotification.txt ]]; then
                read MSG < ~/.updateNotification.txt
                message "$MSG"
                cat /dev/null > ~/.updateNotification.txt
            fi
            sleep 3600
        done
        exit 0
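    For reference, the /etc/xdg/autostart mechanism mentioned above is the desktop-neutral option among those listed: a .desktop entry placed there is picked up at login by KDE, GNOME, Unity and Xfce alike. A minimal sketch of what an installer might write follows; the file name, the entry name, and the script location are illustrative assumptions, not a tested recipe:

        # /etc/xdg/autostart/update-notifier-monitor.desktop (hypothetical name and path)
        [Desktop Entry]
        Type=Application
        Name=Update Notification Monitor
        Comment=Watches ~/.updateNotification.txt and shows a message when it has content
        Exec=/usr/local/bin/updateMonitor.sh
        X-GNOME-Autostart-enabled=true

    A root-run bash installer would then only need to copy the monitoring script to /usr/local/bin and this .desktop file into /etc/xdg/autostart.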

    Read the article

  • Archbeat Link-O-Rama Top 10 Facebook Faves for October 20-26, 2013

    - by OTN ArchBeat
    Here's this week's list of the Top 10 items shared on the OTN ArchBeat Facebook Page from October 27 - November 2, 2013.

    Visualizing and Process (Twitter) Events in Real Time with Oracle Coherence | Noah Arliss
    This OTN Virtual Developer Day session explores in detail how to create a dynamic HTML5 Web application that interacts with Oracle Coherence as it’s processing events in real time, using the Avatar project and Oracle Coherence’s Live Events feature. Part of OTN Virtual Developer Day: Harnessing the Power of Oracle WebLogic and Oracle Coherence, November 5, 2013. 9am to 1pm PT / 12pm to 4pm ET / 1pm to 5pm BRT. Register now!

    HTML5 Application Development with Oracle WebLogic Server | Doug Clarke
    This free OTN Virtual Developer Day session covers the support for WebSockets, RESTful data services, and JSON infrastructure available in Oracle WebLogic Server. Part of OTN Virtual Developer Day: Harnessing the Power of Oracle WebLogic and Oracle Coherence, November 5, 2013. 9am to 1pm PT / 12pm to 4pm ET / 1pm to 5pm BRT. Register now!

    Video: ADF BC and REST services | Frederic Desbiens
    Spend a few minutes with Oracle ADF principal product manager Frederic Desbiens and learn how to publish ADF Business Components as RESTful web services.

    One Client Two Clusters | David Felcey
    "Sometimes it's desirable to have a client connect to multiple clusters, either because the data is dispersed or, for instance, the clusters are in different locations for high availability," says David Felcey. David shows you how in this post, which includes a simple example.

    Exceptions Handling and Notifications in ODI | Christophe Dupupet
    Oracle Fusion Middleware A-Team director Christophe Dupupet reviews the techniques that are available in Oracle Data Integrator to guarantee that the appropriate individuals are notified in the event that ODI processes are impacted by network outages or other mishaps.

    Securing WebSocket applications on Glassfish | Pavel Bucek
    WebSocket is a key capability standardized into Java EE 7. Many developers wonder how WebSockets can be secured. One very nice characteristic of WebSocket is that it in fact completely piggybacks on HTTP. In this post Pavel Bucek demonstrates how to secure WebSocket endpoints in GlassFish using TLS/SSL.

    Oracle Coherence, Split-Brain and Recovery Protocols In Detail | Ricardo Ferreira
    Ricardo Ferreira's article "provides a high level conceptual overview of Split-Brain scenarios in distributed systems," focusing on a "specific example of cluster communication failure and recovery in Oracle Coherence."

    Non-programmatic Authentication Using Login Form in JSF (For WebCenter & ADF) | JayJay Zheng
    Oracle ACE JayJay Zheng shares an approach that "avoids the programmatic authentication and works great for having a custom login page developed in WebCenter Portal integrated with OAM authentication."

    Tech Article: SOA in Real Life: Mobile Solutions
    The latest article in the Industrial SOA series looks at mobile computing and how companies are developing SOA to go. http://pub.vitrue.com/PUxT

    The ACE Director Thing | Dr. Frank Munz
    Frank Munz finally gets around to blogging about achieving Oracle ACE Director status and shares some interesting insight into what will change—and what won't—thanks to that new status. A good, short read for those interested in learning more about the Oracle ACE program.

    Thought for the Day
    "Even if you're on the right track, you'll get run over if you just sit there."
    — Will Rogers, American humorist (November 4, 1879 – August 15, 1935)
    Source: brainyquote.com

    Read the article

  • The Evolution of Television and Home Entertainment

    - by Bill Evjen
    This is a group that is focused on entertainment in the aviation industry. I am attending their conference for the first time as it relates to my job at Swank Motion Pictures and what we do for our various markets. I will post my notes here.

    The Evolution of Television and Home Entertainment, by Patrick Cosson, Veebeam

    TV has been the center of living rooms for some time. Conversations and culture evolve around the TV. The way we consume this content has been changing dramatically. After TV, we had the MTV revolution of TV. It created shorter attention spans; it made us more materialistic, narcissistic, and not easily impressed. Then we came to the Internet. The amount of content has expanded. It contains a ton of user-generated content, and provides filtering, organization and distribution. We now have a problem. We are in the age of digital excess. We can access whatever we want. In conjunction with this, we are moving. The challenge we have now is curation.

    The trends we see:
    • a rapid shift from scheduled to on-demand consumption
    • a move to Internet protocols from cable
    • rapid fragmentation of media
    • a transition from the TV set to a variety of screens
    • social connections bring mediators and amplifiers

    TiVo – the shift to on demand. It is because of a time-crunch; it provides personal experiences. Once old consumption habits are changed, there is no way back! Experience shows that people are loading up content and then bringing it with them on planes, to hotels, etc.

    Rapid fragmentation of media sources. Many new professional content sources and channels, the rise of digital distribution, and the rise of user-generated content contribute to the wealth of content sources and abundant choice. Netflix, BBC iPlayer, hulu, Pandora, iTunes, Amazon Video, Vudu, Voddler, Spotify – these companies didn’t exist 5 years ago. People now expect this kind of consumption, and people are now thinking about how to deliver all these tools.

    Transition from the TV set to multi-screens. The TV screen has traditionally been the dominant consumption screen for TV and video. Now the PC, game consoles, and various mobile devices are rapidly becoming common video devices. Multi-screens are now the norm.

    Social connections becoming key mediators. What increasingly funnels traffic on the web, social networking enablers, will become an integral part of the discovery, consumption and sharing model for television. The revolution will be broadcasted on Facebook and Twitter.

    There is business disruption:
    • a lot of new entrants
    • rapid internationalization
    • increasing competition from existing media players
    • a fragmenting audience base

    Web browser:
    • freedom to access any site
    • the fight over the walled garden
    • most devices are not powerful enough to support a full browser
    • the PC will always be present in the living room
    • wireless link between PC and TV
    • output 1080p, plays anything, secure

    Key players and their challenges:
    • Services: Internet media is increasingly interconnected to social media and publicly shared UGC. Content delivery is moving to IPTV. Rights management issues are creating silos and hindering a great user experience and growth.
    • Devices: Devices are becoming people’s windows into all kinds of media from all kinds of sources. There won’t be a consolidation of the device landscape, rather the opposite. Finding the right niche makes the most sense.

    We are moving to an on-demand world of streaming. People want full access to anything.

    Read the article

  • Logging connection strings

    If you use some of the dynamic features of SSIS, such as package configurations or property expressions, then sometimes trying to work out where your connections are pointing can be a bit confusing. You will work it out in the end, but it can be useful to explicitly log this information so that when things go wrong you can just review the logs. You may wish to develop this idea further and encapsulate such logging into a custom task, but for now let's keep it simple and use the Script Task. The Script Task code below will raise an Information event showing the name and connection string for a connection.

        Imports System
        Imports Microsoft.SqlServer.Dts.Runtime

        Public Class ScriptMain
            Public Sub Main()
                Dim fireAgain As Boolean

                ' Get the connection string, we need to know the name of the connection
                Dim connectionName As String = "My OLE-DB Connection"
                Dim connectionString As String = Dts.Connections(connectionName).ConnectionString

                ' Format the message and log it via an information event
                Dim message As String = String.Format("Connection ""{0}"" has a connection string of ""{1}"".", _
                    connectionName, connectionString)
                Dts.Events.FireInformation(0, "Information", message, Nothing, 0, fireAgain)

                Dts.TaskResult = Dts.Results.Success
            End Sub
        End Class

    Building on that example, it is probably more flexible to log all connections in a package, as shown in the next example.

        Imports System
        Imports Microsoft.SqlServer.Dts.Runtime

        Public Class ScriptMain
            Public Sub Main()
                Dim fireAgain As Boolean

                ' Loop through all connections in the package
                For Each connection As ConnectionManager In Dts.Connections
                    ' Get the connection string and log it via an information event
                    Dim message As String = String.Format("Connection ""{0}"" has a connection string of ""{1}"".", _
                        connection.Name, connection.ConnectionString)
                    Dts.Events.FireInformation(0, "Information", message, Nothing, 0, fireAgain)
                Next

                Dts.TaskResult = Dts.Results.Success
            End Sub
        End Class

    Using the Information event makes the output readily available in the designer, for example in the Visual Studio Output window (Ctrl+Alt+O) or the package designer Execution Results tab, and it also allows you to control the logging by choosing which events to log in the normal way.

    Now, before somebody starts commenting that this is a security risk, I would like to highlight good practice for building connection managers. Firstly, the Password property, or any other similarly sensitive property, is always defined as write-only; and secondly, the connection string property only uses the public properties to assemble the connection string value when requested. In other words, the connection string will never contain the password. I have seen a couple of cases where this is not true, but that was just bad development by third parties; you won’t find anything like that in the box from Microsoft.

    Whilst writing this code it made me wish that there was a custom log entry you could just turn on that did this for you, but alas connection managers do not even seem to support custom events. It did, however, remind me of a very useful event that is often overlooked and fits rather well alongside connection string logging: the Execute SQL Task’s custom ExecuteSQLExecutingQuery event. To quote the help reference:

    Custom Messages for Logging – Provides information about the execution phases of the SQL statement. Log entries are written when the task acquires connection to the database, when the task starts to prepare the SQL statement, and after the execution of the SQL statement is completed. The log entry for the prepare phase includes the SQL statement that the task uses.

    It is the last part that is so useful. How often have you used an expression to derive a SQL statement and you want to log it to make sure the correct SQL is being returned? You need to turn it on; by default no custom log events are captured, but I’ll refer you to a walkthrough on setting up the logging for ExecuteSQLExecutingQuery by Jamie.

    Read the article

  • Access Denied

    - by Tony Davis
    When Microsoft executives wake up in the night screaming, I suspect they are having a nightmare about their own version of Frankenstein's monster. Created with the best of intentions, without thinking too hard about the long-term strategy, and having long outlived its usefulness, the monster still lives on, occasionally wreaking vengeance on the innocent. Its name is Access; a living synthesis of disparate body parts that is resistant to all attempts at a mercy-killing.

    In 1986, Microsoft had no database products, and needed one for their new OS/2 operating system, the successor to MS-DOS. They bought exclusive rights to Sybase DataServer, and were also intent on developing a desktop database to capture Ashton-Tate's dominance of that market with dBASE. This project, first called 'Omega' and later 'Cirrus', eventually spawned two products: Visual Basic in 1991 and Access in late 1992. Whereas Visual Basic battled with PowerBuilder for dominance in the client-server market, Access easily won the desktop database battle, with dBASE III and DataEase falling away.

    Access did an excellent job of abstracting and simplifying the task of building small database applications in a short amount of time, for a small number of departmental users, and often for a transient requirement. There is an excellent front end and forms generator; we not only see it in Access, but parts of it also reappear in SSMS. It's good. A business user can pull together useful reports without relying on extensive technical support. A skilled Access programmer can deliver a fairly sophisticated application whilst the traditional client-server programmer is still sharpening his pencil. Even for the SQL Server programmer, the forms generator of Access is useful for sketching out application designs.

    So far, so good, but here's where the problems start: Access ties together two different products, and the backend of Access is the bugbear. The limitations of Jet/ACE are well known and documented. They range from MDB files that are prone to corruption, especially as they grow in size, to pathetic security and "copy and paste" backups. The biggest problem, though, was an infamous lack of scalability. Because Microsoft never realized how long the product would last, they put little energy into improving the beast. Microsoft 'ate their own dog food' by using Access for Microsoft Exchange and Outlook. They choked on it. For years, scalability and performance problems with Exchange Server have been laid at the door of the Jet Blue engine on which it relies. Substantial development work in Exchange 2010 was required just to improve the engine and storage schema so that it more efficiently handled the reading and writing of mails. The alternative of using SQL Server just never panned out.

    The Jet engine was designed to limit concurrent users to a small number (10-20). When Access applications outgrew this, bitter experience proved that there really is no easy upgrade path from Access to SQL Server, beyond rewriting the whole lot from scratch. The various initiatives to do this never quite bridged the cultural gulf between Access and a true relational database.

    So, what are the obvious alternatives for small, strategic database applications? I know many users who, for simple 'list maintenance' requirements, are very happy using Excel databases. Surely, now that PowerPivot has led the way, it is time for Microsoft to offer a new RAD package for database application development; namely an Excel-based front end for SQL Server Express. In that way, we'll have a powerful and familiar front end, to a scalable database, and a clear upgrade path when an app takes off and needs to go enterprise.

    Cheers,
    Tony.

    Read the article

  • A (Late) Meme Monday Post: On SQLFamily

    - by Argenis
    Yesterday a member of the SQL community who I deeply admire sent me a DM on Twitter asking whether I had done a SQLFamily post for Thomas LaRock’s (blog|@SQLRockstar) Meme Monday for November. I replied that I had not, and that I regretted not having done so. A subtle DM followed my response: “Get on it, you have all week”. And indeed I must. So here’s an attempt to express some of my feelings on a community that has catapulted my career like nothing else before I embraced it.

    Nanos Gigantium Humeris Insidentes
    I stand on the shoulders of giants. My SQLFamily has given me support at all levels, professionally and personally. There is never a lack of will to help and provide advice to others in this community. And I do my best to help: on #SQLHelp on Twitter, via email, or even on the phone. I expect no retribution, because I know that when and if I do run into problems, my SQLFamily will be there for me. I have met some of the most humble, dedicated and professional people in the SQL community, and some of them have pretty big titles: MVPs, MCMs, Regional Mentors, leaders of PASS, SQLCAT members, and even PMs and devs on the SQL Server team. All are welcome, and that includes YOU! I have also met some people who are rather reserved and don’t participate as much in the community, for whatever reason. Be that as it may, let it be known to all that we are a very welcoming community – heck, some of my closest friends and people I can count on in the community have completely opposite political views. We share one goal: to get better and help others get better. Even if you are a lurker, my hope is that one day you’ll decide to give back some of what you have learned.

    You have to take it to the next level
    In one of my previous jobs, as an IT supervisor, I used to tell my team all the time about the benefits of continuous education and self-driven learning. Shortly after I left that job, the company went bankrupt and some of my staff got laid off, some without any severance pay whatsoever. I eventually found out that some of them had a really hard time finding another job, because their skills were simply outdated. They had become stale professionals. Don’t be one of them. If you don’t take advantage of these learning resources, somebody else will, and that person will have an advantage over you when applying for that awesome job position that just opened. There’s a severe shortage of good DBAs and DB devs out there. What’s your excuse for not being excellent? Even if your knowledge of SQL Server is at the beginner level, you really have no excuse not to get better. Just go to SQLUniversity and learn from there. Don’t get stale!

    Thank You
    To all of you in the SQL community who put so much time and energy into helping others: my deepest gratitude. I can’t wait to meet you all again at the next event and share our SQL stories over a pint of beer (or a shot of Jaeger). Cheers!

    -Argenis

    Read the article

  • Performing an upgrade from TFS 2008 to TFS 2010

    - by Enrique Lima
    I recently had to go through the process of migrating a TFS 2008 SP1 environment to TFS 2010. I will go into the details of the tasks that I went through, but first I want to explain why I define it as a migration and not an upgrade.

    When this environment was set up, based on support and limitations for TFS 2008, we used a 32-bit platform for the TFS Application Tier and Build Servers. The Data Tier, since we were installing SP1 for TFS 2008, was done as a 64-bit installation. We knew at that point that TFS 2010 was in the picture, so that served as further motivation to make that a 64-bit install of SQL Server. The SQL Server at that point was a single-instance (default) installation too. We had a pretty good strategy in place for backups of the databases supporting the environment (and this made the migration so much smoother), so we were pretty familiar with the databases and the purpose they serve.

    I am sure many of you who have gone through a TFS 2008 installation have encountered challenges and trials, and likely even more so if you, like me, needed to configure your deployment for SSL. So, frankly, I was a little concerned about the process of migrating. They say practice makes perfect, and this environment I worked on is in some way my brain child, so I was not ready nor willing for this to be a failure or something that would impact my client's work.

    Prior to going through the migration process, we did the install of the new environment. The Data Tier was the same, with a new named instance in place to host the 2010 install. The Application Tier was in place too, and we did the DefaultCollection configuration to test and validate that all components were in place as they should be.

    Anyway, on to the tasks for the migration (thanks to Martin Hinshelwood for his very thorough documentation):

    1. Close access to TFS 2008. You want to make sure all code is checked in and ready to go. We set a gap of 8 hours between code lock and the start of migration to give time for any unexpected delay. How do we close access? Stop IIS.
    2. Backup your databases. Which ones?
       • TfsActivityLogging
       • TfsBuild
       • TfsIntegration
       • TfsVersionControl
       • TfsWorkItemTracking
       • TfsWorkItemTrackingAttachments
    3. Restore the databases to the new named instance (make sure you keep the same names).
    4. Now comes the fun part: the actual import/migration of the databases. A couple of things happen here. The TfsIntegration database will be scanned, and the other databases will be checked to validate that they exist. Those databases then go through a process in which data is extracted and transferred to the TfsVersionControl database, which is then renamed to Tfs_<Collection>.

    You will be using a tool called tfsconfig with the import option. This tool is located in the TFS 2010 installation path (C:\Program Files\Microsoft Team Foundation Server 2010\Tools), and the command to use is as follows:

        tfsconfig import /sqlinstance:<instance> /collectionName:<name> /confirmed

    Here <instance> is the SQL Server instance where you restored the databases, and <name> is the name you will give the collection. As for /confirmed, this means you have done a backup of the databases. Why? Well, remember you are going to merge the databases you restored when you execute the tfsconfig import command.

    The process will go through about 200 tasks. Once it completes, go to the Team Foundation Server Administration Console and validate your imported databases and contents. We’ll keep this manageable, so the next post is about how to complete the implementation with the SSL configuration.

    Read the article

  • .Net Reflector 6.5 EAP now available

    - by CliveT
    With the release of CLR 4 being so close, we’ve been working hard on getting the new C# and VB language features implemented inside Reflector. The work isn’t complete yet, but we have some of the features working. Most importantly, there are going to be changes to the Reflector object model, and we thought it would be useful for people to see the changes and have an opportunity to comment on them.

    Before going any further, we should tell you what the EAP contains that’s different from the released version. A number of bugs have been fixed, mainly bugs that were raised via the forum. This is slightly offset by the fact that this EAP hasn’t had a whole lot of testing, and there may have been new bugs introduced during the development work we’ve been doing.

    The C# language writer has been changed to display in and out co- and contra-variance markers on interfaces and delegates, and to display default values for optional parameters in method definitions (see the sketch below for the kind of constructs involved). We also concisely display values passed by reference into COM calls. However, we do not change call sites to display calls using named parameters; this looks like hard work to get right.

    The forthcoming version of C# introduces dynamic types and dynamic calls. The new version of Reflector should display a dynamic call rather than the generated C#:

        dynamic target = MyTestObject();
        target.Hello("Mum");

    We have a few bugs in this area where we are not casting to dynamic when necessary. These have been fixed on a branch and should make their way into the next EAP. To support the dynamic features, we’ve added the types IDynamicMethodReferenceExpression, IDynamicPropertyIndexerExpression, and IDynamicPropertyReferenceExpression to the object model. These types, based on the versions without “Dynamic” in the name, reflect the fact that we don’t have full information about the method that is going to be called, but only have its name (as a string). These interfaces are going to change – in an internal version, they have been extended to include information about which parameter positions use runtime types and which use compile-time types. There’s also the interface IDynamicVariableDeclaration, which can be used to determine if a particular variable is used at dynamic call sites as a target.

    A couple of these language changes have also been added to the Visual Basic language writer. The new features are exposed only when the optimization level is set to .NET 4. When the level is set this high, the other standard language writers will simply display a message to say that they do not handle such an optimization level. Reflector Pro now has 4.0 as an optional compilation target, and we have done some work to get the pdb generation right for these new features.

    The EAP version of Reflector no longer installs the add-in on startup. The first time you run the EAP, it displays the integration options dialog. You can use the checkboxes to select the versions of Visual Studio into which you want to install the EAP version. Note that you can only have one version of Reflector Pro installed in Visual Studio; if you install into a Visual Studio that has another version installed, the previous version will be removed.

    Please try it out and send your feedback to the EAP forum.
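    For readers unfamiliar with the C# 4 constructs mentioned above, here is a small illustrative snippet (our own example, not code from Reflector) showing the variance markers and optional-parameter defaults that the language writer now displays:

        // Covariant type parameter: an IProducer<string> can be used as an IProducer<object>.
        public interface IProducer<out T>
        {
            T Produce();
        }

        // Contravariant type parameter: a Consumer<object> can stand in for a Consumer<string>.
        public delegate void Consumer<in T>(T item);

        public static class Greeter
        {
            // Optional parameter with a default value, which the C# language
            // writer now shows in the decompiled method definition.
            public static void Hello(string name, int times = 1)
            {
                for (int i = 0; i < times; i++)
                {
                    System.Console.WriteLine("Hello, " + name);
                }
            }
        }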

    Read the article

  • No sound lenovo t60 alsa ad1981 iec958

    - by Nate
    Any help on getting the sound to come through my Lenovo T60's built-in speakers, headphones, or mic would be greatly appreciated. The three buttons to increase/decrease the sound seem to work. The BIOS has the sound card enabled, and the buttons beep when pressed. When going to YouTube or playing music, no sound is heard. Thanks, Nate

    Feb 23 – I didn't see anything specific in the sys logs with Rhythmbox when connecting my iPod. Rhythmbox is playing, but still no sound. Output is set to analog output, and the volume is all of the way up. Here are the syslog details for today:

        Feb 23 17:42:32 itgis01398 rsyslogd: [origin software="rsyslogd" swVersion="4.2.0" x-pid="824" x-info="http://www.rsyslog.com"] rsyslogd was HUPed, type 'lightweight'.
        Feb 23 17:42:33 itgis01398 rsyslogd: [origin software="rsyslogd" swVersion="4.2.0" x-pid="824" x-info="http://www.rsyslog.com"] rsyslogd was HUPed, type 'lightweight'.
        Feb 23 17:42:49 itgis01398 anacron[968]: Job `cron.daily' terminated
        Feb 23 17:42:49 itgis01398 anacron[968]: Job `cron.weekly' started
        Feb 23 17:42:49 itgis01398 anacron[12067]: Updated timestamp for job `cron.weekly' to 2011-02-23
        Feb 23 17:42:53 itgis01398 anacron[968]: Job `cron.weekly' terminated
        Feb 23 17:42:53 itgis01398 anacron[968]: Normal exit (2 jobs run)
        Feb 23 18:01:19 itgis01398 kernel: [ 2731.324067] usb 1-5: new high speed USB device using ehci_hcd and address 3
        Feb 23 18:01:19 itgis01398 kernel: [ 2731.482879] Initializing USB Mass Storage driver...
        Feb 23 18:01:19 itgis01398 kernel: [ 2731.483061] usb-storage 1-5:1.0: Quirks match for vid 05ac pid 1205: 10
        Feb 23 18:01:19 itgis01398 kernel: [ 2731.483116] scsi6 : usb-storage 1-5:1.0
        Feb 23 18:01:19 itgis01398 kernel: [ 2731.483306] usbcore: registered new interface driver usb-storage
        Feb 23 18:01:19 itgis01398 kernel: [ 2731.483310] USB Mass Storage support registered.
        Feb 23 18:01:20 itgis01398 kernel: [ 2732.481116] scsi 6:0:0:0: Direct-Access Apple iPod 1.62 PQ: 0 ANSI: 0
        Feb 23 18:01:20 itgis01398 kernel: [ 2732.482466] sd 6:0:0:0: Attached scsi generic sg2 type 0
        Feb 23 18:01:20 itgis01398 kernel: [ 2732.485095] sd 6:0:0:0: [sdb] Adjusting the sector count from its reported value: 7999488
        Feb 23 18:01:20 itgis01398 kernel: [ 2732.485110] sd 6:0:0:0: [sdb] 7999487 512-byte logical blocks: (4.09 GB/3.81 GiB)
        Feb 23 18:01:20 itgis01398 kernel: [ 2732.487933] sd 6:0:0:0: [sdb] Write Protect is off
        Feb 23 18:01:20 itgis01398 kernel: [ 2732.487941] sd 6:0:0:0: [sdb] Mode Sense: 64 00 00 08
        Feb 23 18:01:20 itgis01398 kernel: [ 2732.487947] sd 6:0:0:0: [sdb] Assuming drive cache: write through
        Feb 23 18:01:20 itgis01398 kernel: [ 2732.489927] sd 6:0:0:0: [sdb] Adjusting the sector count from its reported value: 7999488
        Feb 23 18:01:20 itgis01398 kernel: [ 2732.491150] sd 6:0:0:0: [sdb] Assuming drive cache: write through
        Feb 23 18:01:20 itgis01398 kernel: [ 2732.491163] sdb: sdb1 sdb2
        Feb 23 18:01:20 itgis01398 kernel: [ 2732.510428] sd 6:0:0:0: [sdb] Adjusting the sector count from its reported value: 7999488
        Feb 23 18:01:20 itgis01398 kernel: [ 2732.511288] sd 6:0:0:0: [sdb] Assuming drive cache: write through
        Feb 23 18:01:20 itgis01398 kernel: [ 2732.511297] sd 6:0:0:0: [sdb] Attached SCSI removable disk
        Feb 23 18:01:21 itgis01398 kernel: [ 2733.746675] FAT: invalid media value (0x2f)
        Feb 23 18:01:21 itgis01398 kernel: [ 2733.746682] VFS: Can't find a valid FAT filesystem on dev sdb1.
        Feb 23 18:01:22 itgis01398 upstart-udev-bridge[330]: Env must be KEY=VALUE pairs
        Feb 23 18:02:07 itgis01398 kernel: [ 2780.115826] sd 6:0:0:0: [sdb] Unhandled sense code
        Feb 23 18:02:07 itgis01398 kernel: [ 2780.115835] sd 6:0:0:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        Feb 23 18:02:07 itgis01398 kernel: [ 2780.115844] sd 6:0:0:0: [sdb] Sense Key : Medium Error [current]
        Feb 23 18:02:07 itgis01398 kernel: [ 2780.115855] Info fld=0x0
        Feb 23 18:02:07 itgis01398 kernel: [ 2780.115859] sd 6:0:0:0: [sdb] Add. Sense: Unrecovered read error
        Feb 23 18:02:07 itgis01398 kernel: [ 2780.115870] sd 6:0:0:0: [sdb] CDB: Read(10): 28 00 00 08 fd e9 00 00 f0 00
        Feb 23 18:02:07 itgis01398 kernel: [ 2780.115892] end_request: I/O error, dev sdb, sector 589289
        Feb 23 18:02:49 itgis01398 kernel: [ 2821.351464] sd 6:0:0:0: [sdb] Unhandled sense code
        Feb 23 18:02:49 itgis01398 kernel: [ 2821.351473] sd 6:0:0:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        Feb 23 18:02:49 itgis01398 kernel: [ 2821.351482] sd 6:0:0:0: [sdb] Sense Key : Medium Error [current]
        Feb 23 18:02:49 itgis01398 kernel: [ 2821.351493] Info fld=0x0
        Feb 23 18:02:49 itgis01398 kernel: [ 2821.351497] sd 6:0:0:0: [sdb] Add. Sense: No additional sense information
        Feb 23 18:02:49 itgis01398 kernel: [ 2821.351507] sd 6:0:0:0: [sdb] CDB: Read(10): 28 00 00 08 fe d9 00 00 10 00
        Feb 23 18:02:49 itgis01398 kernel: [ 2821.351530] end_request: I/O error, dev sdb, sector 589529
        Feb 23 18:17:01 itgis01398 CRON[12709]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)

    Read the article

  • Windows Phone 8, possible tablets and what the latest update might mean

    - by Roger Hart
    Microsoft have just announced an update to Windows Phone 8. As one of the five, maybe six, people who actually bought a WP8 handset, I found this interesting. Then I read the blog post about it, and rushed off to write somewhat less than a thousand words about a single picture. The blog post announces an extra column of tiles on the start screen, and support for higher resolutions. If we ignore all the usual flummery about how this will make your life better, that (and the rotation lock) sounds a little like stage setting for tablets. Looking at the preview screenshot, I started to wonder.

    What it's called

    Phablet_5F00_StartScreenProductivity_5F00_01_5F00_072A1240.jpg

    Pretty conclusive. If you can brand something a "phablet" and sleep at night, you're made of sterner stuff than I am, but that's beside the point. It's explicit in the post that Microsoft are expecting a broader range of form factors for WP8, but they stop short of quite calling out tablet size. The extra columns and resolution definitely back that up, so why stop at a 6 inch "phablet"? Sadly, the string of numbers there doesn't really look like a Lumia model number - that would be a bit tendentious even for a speculative blog post about a single screenshot. "Productivity" is interesting too. I get into this a bit more below, but this is a pretty clear pitch for a business device.

    What it looks like

    Something that would look quite decent on a 7 inch screen, but a bit too vertical to go toe-to-toe with the Surface. Certainly, it would look a lot better on a large-factor phone than any of the current models. Those tiles are going to get cramped and a bit ugly if the handsets aren't getting bigger.

    What's on it

    You have a bunch of missed calls, you rarely text, you use a stocks app, and your budget spreadsheet and meeting notes are a thumb-reach away. Outlook is your main form of email. You care enough about LinkedIn to not only install its app but give it a huge live tile. There's no beating about the bush here: the implicit persona is a corporate exec. With Nokia in the bag and Blackberry pushing daisies, that may not be a stupid play. There's almost certainly a niche there if they can parlay their corporate credentials into filling it before BYOD (which functionally means an iPhone) reaches the late adopters. The really quite slick WP8 Office implementation ought to help here. This is the face they've chosen to present, the cultural milieu they're normalizing for Windows Phone. It's an iPhone for Serious Business Grown-ups. Could work, I guess.

    Does it mean anything?

    Is the latest WP8 update a sign that we can expect to see tablets running Windows Phone rather than WinRT? Well, WinRT tablets haven't exactly taken off, but I'm not quite going to make a leap like that just from a file name and a column of icons. I feel pretty safe, however, conjecturing that Microsoft would like to squeeze a WP8 "phablet" into the palm of every exec who's ever grumbled about their Blackberry, and this release might get them a bit closer. If it works well incrementing up to larger devices, then that could be a fair hedge against WinRT crashing and burning any harder in the marketplace.

    Read the article

  • maintaining a growing, diverse codebase with continuous integration

    - by Nate
    I am in need of some help with the philosophy and design of a continuous integration setup. Our current CI setup uses buildbot. When I started out designing it, I inherited (well, not strictly, as I was involved in its design a year earlier) a bespoke CI builder that was tailored to run the entire build at once, overnight. After a while, we decided that this was insufficient, and started exploring different CI frameworks, eventually choosing buildbot. One of my goals in transitioning to buildbot (besides getting to enjoy all the whiz-bang extras) was to overcome some of the inadequacies of our bespoke nightly builder.

    Humor me for a moment, and let me explain what I have inherited. The codebase for my company is almost 150 unique C++ Windows applications, each of which has dependencies on one or more of a dozen internal libraries (and many on 3rd party libraries as well). Some of these libraries are interdependent, and have depending applications that (while they have nothing to do with each other) have to be built with the same build of that library. Half of these applications and libraries are considered "legacy" and unportable, and must be built with several distinct configurations of the IBM compiler (for which I have written unique subclasses of Compile), and the other half are built with Visual Studio. The code for each compiler is stored in two separate Visual SourceSafe repositories (which I am simply handling using a bunch of ShellCommands, as there is no support for VSS).

    Our original nightly builder simply took down the source for everything and built things in a certain order. There was no way to build only a single application, pick a revision, or group things. It would launch virtual machines to build a number of the applications. It wasn't very robust, it wasn't distributable, and it wasn't terribly extensible. I wanted to be able to overcome all of these limitations in buildbot.

    The way I did this originally was to create entries for each of the applications we wanted to build (all 150-ish of them), then create triggered schedulers that could build various applications as groups, and then subsume those groups under an overall nightly build scheduler. These could run on dedicated slaves (no more virtual machine chicanery), and if I wanted I could simply add new slaves. Now, if we want to do a full build out of schedule, it's one click, but we can also build just one application should we so desire.

    There are four weaknesses of this approach, however. One is our source tree's complex web of dependencies. In order to simplify config maintenance, all builders are generated from a large dictionary. The dependencies are retrieved and built in a not-terribly-robust fashion (namely, keying off of certain things in my build-target dictionary). The second is that each build has between 15 and 21 build steps, which is hard to browse and look at in the web interface, and since there are around 150 columns, takes forever to load (think from 30 seconds to multiple minutes). Thirdly, we no longer have autodiscovery of build targets (although, as much as one of my coworkers harps on me about this, I don't see what it got us in the first place). Finally, the aforementioned coworker likes to constantly bring up the fact that we can no longer perform a full build on our local machine (though I never saw what that got us either, considering that it took three times as long as the distributed build; I think he is just paranoically phobic of ever breaking the build).

    Now, moving to new development, we are starting to use g++ and Subversion (not porting the old repository, mind you - just for the new stuff). Also, we are starting to do more unit testing ("more" might give the wrong picture... it's more like any), and integration testing (using Python). I'm having a hard time figuring out how to fit these into my existing configuration.

    So, where have I gone wrong philosophically here? How can I best proceed forward (with buildbot - it's the only piece of the puzzle I have license to work on) so that my configuration is actually maintainable? How do I address some of my design's weaknesses? What really works in terms of CI strategies for large, (possibly over-)complex codebases?

    Read the article

  • Wireless Network Found, can't connect, repeated requests for authentication

    - by Herm Holland
    After trawling through the internet, on forums, support websites, and through dozens upon dozens of answered questions on this site, I've not found a solution to what seems like a fairly regular problem: I cannot connect to a wireless network, and am continually asked for the network password. I have tried countless suggested solutions from the different locations I've already referred to. None of them have worked. Details of my experience are as follows:

    I have just recently installed Ubuntu 12.04.1 (32-bit). Ubuntu installed on my system seemingly fine, and I even formatted my hard drive during the process. It's as if it were a new desktop computer. During the installation I was asked to connect to a wireless network. I have a USB wireless card connected which I have used to connect desktop PCs, laptops, and a Wii to the internet from approximately the same area of the house (thus the same distance from the wireless router). I chose my network, entered the correct password for it (I double-checked; it's definitely the right password) and proceeded with the installation. Several times before the installation was complete, I was asked to authenticate the connection, and this seemed to do nothing each time. On the repeated screens the password was already entered in the appropriate box.

    When Ubuntu booted up, the first thing I was faced with (other than something about language settings, or something) was another request for authentication. Again, the password was already there, so I clicked connect. It did not connect. Instead, I was once again faced with repeated requests every few minutes. I went onto my laptop, which is connected to this network, checked the details of the network, and entered them manually into my Ubuntu PC (including the IPv4 and IPv6 information), but this didn't work either, so I set it back to finding the settings automatically. Note, also, that the "Connect automatically" and "Available to all users" boxes are checked, and have been unchecked and rechecked countless times. I have also tried having my user account connect automatically, and to need a password entered at the welcome screen.

    Whilst I've been writing this, it has gone through a spate of connecting successfully to the network for less than a minute, before coming offline again, only to repeat the process. But it has now returned to prompting me for a password every couple of minutes. This computer has already run the Fedora OS and had no trouble connecting to, and maintaining, a connection. I also have a laptop running Windows 7 less than a metre away from this desktop PC, which is connected and has no trouble maintaining a connection at 50%-100% strength (fluctuating). Therefore:

    - I know it's not the wireless card
    - I know it's not the PC itself
    - I know it's not the access point
    - I know it's not the location of my PC or wireless card
    - It is solely because of Ubuntu

    Everything else has worked fine, but the moment Ubuntu was introduced into the equation, it has gone completely wrong. Honestly, I prefer Ubuntu as an OS to Fedora, but if I can't solve the problem it'll be straight back to Fedora that I'll have to go. Can anyone help me at all?

    Read the article

  • DAC pack up all your troubles

    - by Tony Davis
    Visual Studio 2010, or perhaps its apparently-forthcoming sister, "SQL Studio", is being geared up to become the natural way for developers to create databases. Central to this drive is the introduction of 'data-tier application components', or DACs. Applications are developed as normal, but when it comes to deployment, instead of supplying the DBA with a bunch of scripts to create the required database objects, the developer creates a single DAC Package ("DAC Pack"): a zipped XML file containing all the database objects needed by the application, along with versioning information, policies for deployment, and so on.

    It's an intriguing prospect. Developers can work on their development database using their existing tools and source control, and then package up the changes into a single DACPAC for deployment and management. DBAs get an "application level view" of how their instances are being used and the ability to collectively, rather than individually, manage the objects. The DBA needing to manage a large number of relatively small databases can use "DAC snapshots" to get a quick overview of what has changed across all the databases they manage.

    The reason that DAC Packs haven't caused more excitement is that they can only be pushed to SQL Server 2008 R2, and they must be developed or inspected using Visual Studio 2010. Furthermore, what we see right now in VS2010 is more of a 'work-in-progress' or 'vision of the future', with serious shortcomings and restrictions that render it unsuitable for anything but small 'non-critical' departmental databases. The first problem is that DAC Packs support a limited set of schema objects (corresponding closely to the features available on 'Azure'). This means that Service Broker queues, CLR objects, and perhaps most critically security (permissions, certificates etc.) are off-limits. Applications that require these objects will need to add them via a post-deployment TSQL script, rather defeating the whole idea.

    More worrying still is the process for altering a database with a DAC Pack. The grand 'collective' philosophy, whereby a single XML file can be used for deploying and managing builds and changes, extends, unfortunately, to database upgrades. Any change to a database object will result in the creation of a new database, copying the data from the old version, nuking the previous one, and then renaming the new one. Simple, eh? The problem is that even something as trivial as adding a comment to a stored procedure in a 5GB database will require the server to find at least twice as much space, as well as sufficient elbow-room in the transaction log for copying the largest table. Of course, you'll need to take the database offline for the full course of the deployment, which is likely to take a long time if there is a lot of data. This upgrade/rename process breaks the log chain, makes any subsequent full restore operation highly complicated, and will also break log shipping.

    As with any grand vision, the devil is always in the detail. It's hard to fathom why Microsoft hasn't used a SQL Compare-style approach to the upgrade process, altering a database with a change script, and this will surely be adopted in the near future. Something had to be in place for VS2010, but right now DAC Packs only make sense for Azure. For this, they're cute, but hardly compelling. Nevertheless, DBAs would do well to get familiar with VS 2010 and DAC Packs. Like it or not, they're both coming.

    Cheers,
    Tony.

    Read the article

  • Adding custom interfaces to your mock instance.

    - by mehfuzh
    Previously, I made a post showing how you can leverage the dependent interfaces that are implemented by JustMock during the creation of a mock instance. That post should help you understand how JustMock behaves internally for classes or interfaces that implement other interfaces. But the question remains: how can you add your own custom interface to your target mock? In this post, I am going to show you just that. Today I will not start with a dummy class as usual; rather, I will use two of the most common interfaces in the .NET Framework and create a mock combining them.

    Before I start, I would like to point out that in the recent release of JustMock we have extended Mock.Create<T>(..) with support for additional settings through a closure. You can add your own custom interfaces, specify directly the real constructor that should be called, or even set the behavior of your target.

    Fast-forwarding directly to the point, here is the test code for creating a mock that mixes ICloneable and IDisposable using the changeset mentioned above:

        var myMock = Mock.Create<IDisposable>(x => x.Implements<ICloneable>());
        var myMockAsClonable = myMock as ICloneable;
        bool isCloned = false;

        Mock.Arrange(() => myMockAsClonable.Clone()).DoInstead(() => isCloned = true);

        myMockAsClonable.Clone();

        Assert.True(isCloned);

    Here we are creating the target mock for IDisposable and also implementing ICloneable. Finally, we use "as" to get the ICloneable reference, then arrange it, act on it, and assert that the expectation is met. This is a very rudimentary example; you can do the same for a given class:

        var realItem = Mock.Create<RealItem>(x =>
        {
            x.Implements<IDisposable>();
            x.CallConstructor(() => new RealItem(0));
        });
        var iDispose = realItem as IDisposable;

        iDispose.Dispose();

    Here I am also calling the real constructor for the RealItem class. Note that you can implement custom interfaces only for non-sealed classes; otherwise it will end up with a proper exception. Also, this feature doesn't require the profiler; if you are running inside the Silverlight runtime, feel free to try it with the JustMock add-in turned off :-).

    TIP: The ability to specify the real constructor can be a useful productivity boost when code changes, since you can re-factor the usage with one click in your favorite re-factoring tool.

    That's it for now, and hope that helps. Enjoy!!
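    As a quick addendum, here is a hedged sketch, using the same JustMock API as the samples above, showing that behavior can be arranged on both faces of the mixed mock - the IDisposable target as well as the ICloneable it implements:

        var mock = Mock.Create<IDisposable>(x => x.Implements<ICloneable>());
        var asClonable = mock as ICloneable;
        bool disposed = false;
        bool cloned = false;

        // Arrange expectations on the primary interface and on the added one.
        Mock.Arrange(() => mock.Dispose()).DoInstead(() => disposed = true);
        Mock.Arrange(() => asClonable.Clone()).DoInstead(() => cloned = true);

        mock.Dispose();
        asClonable.Clone();

        Assert.True(disposed);
        Assert.True(cloned);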

    Read the article

  • How to protect UI components using OPSS Resource Permissions

    - by frank.nimphius
    ADF security protects ADF bound pages, bounded task flows and ADF Business Components entities with framework-specific JAAS permission classes (RegionPermission, TaskFlowPermission and EntityPermission). If used in combination with the ADF security expression language and security checks performed in Java, this protection already provides you with fine-grained access control that can also be used to secure UI components like buttons and input text fields. For example, the EL shown below disables the user profile panel tabs for unauthenticated users:

        <af:panelTabbed id="pt1" position="above">
          ...
          <af:showDetailItem text="User Profile" id="sdi2"
                             disabled="#{!securityContext.authenticated}">
          </af:showDetailItem>
          ...
        </af:panelTabbed>

    The next example disables a panel tab item if the authenticated user is not granted access to the bounded task flow exposed in a region on this tab:

        <af:panelTabbed id="pt1" position="above">
          ...
          <af:showDetailItem text="Employees Overview" id="sdi4"
                             disabled="#{!securityContext.taskflowViewable['/WEB-INF/EmployeeUpdateFlow.xml#EmployeeUpdateFlow']}">
          </af:showDetailItem>
          ...
        </af:panelTabbed>

    Security expressions like those shown above allow developers to check the user's permission, authentication and role membership status before showing UI components. Similarly, using Java, developers can use code like that shown below to verify the user's authentication status:

        ADFContext adfContext = ADFContext.getCurrent();
        SecurityContext securityCtx = adfContext.getSecurityContext();
        boolean userAuthenticated = securityCtx.isAuthenticated();

    Note that the Java code lines use the same security context reference that is used with expression language. But is this all that there is? No! The goal of ADF Security is to enable all ADF developers to build secure web applications with JAAS (Java Authentication and Authorization Service). For this, more fine-grained protection can be defined using the ResourcePermission, a generic JAAS permission class owned by the Oracle Platform Security Services (OPSS).

    Using the ResourcePermission class, developers can grant permission to functional parts of an application that are not protected by page or task flow security. For example, an application menu allows creating and canceling product shipments to customers. However, only a specific user group - or application role, which is the better way to use ADF Security - is allowed to cancel a shipment. To enforce this rule, a permission is needed that can be used declaratively on the UI to hide a menu entry, and programmatically in Java to check the user's permission before the action is performed. Note that multiple lines of defense are what you should implement in your application development. Don't just rely on UI protection through hidden or disabled command options.

    To create a menu protection permission for an ADF Security enabled application, you choose Application | Secure | Resource Grants from the Oracle JDeveloper menu. The editor that opens shows a visual representation of the jazn-data.xml file that is used at design time to define security policies and user identities for testing. An option in the Resource Grants section is to create a new Resource Type. A list of pre-defined types exists for you to create policy definitions for; many of these pre-defined types use the ResourcePermission class.

    To create a custom Resource Type, for example to protect application menu functions, you click the green plus icon next to the Resource Type select list. The Create Resource Type editor that opens allows you to add a name for the resource type, a display name that is shown when granting resource permissions, and a description. The ResourcePermission class name is already set. In the menu protection sample, you add the following information:

        Name: MenuProtection
        Display Name: Menu Protection
        Description: Permission to grant menu item permissions

    OK the dialog to close the resource permission creation.

    To create a resource policy that can be used to check user permissions at runtime, click the green plus icon in the Resources section of the Resource Grants section. In the Create Resource dialog, provide a name for the menu option you want to protect. To protect the cancel shipment menu option, create a resource with the following settings:

        Resource Type: Menu Protection
        Name: Cancel Shipment
        Display Name: Cancel Shipment
        Description: Grant allows user to cancel customer goods shipment

    A new resource, Cancel Shipment, is added to the Resources panel. Initially the resource is not granted to any user, enterprise or application role. To grant the resource, click the green plus icon in the Granted To section, select the Add Application Role option, and choose one or more application roles in the opened dialog. Finally, you click the process action to define the policy.

    Note that a permission can have multiple actions that you can grant individually to users and roles. The cancel shipment permission, for example, could have another action "view" defined to determine which users should see that this option exists and which users don't.

    To use the cancel shipment permission, select the disabled property on a command item, like af:commandMenuItem, and click the arrow icon on the right. From the context menu, choose the Expression Builder entry. Expand the ADF Bindings | securityContext node and click the userGrantedResource option. Hint: you can expand the Description panel below the EL selection panel to see an example of what the grant should look like.

    The EL that is created needs to be manually edited to read:

        #{!securityContext.userGrantedResource[
            'resourceName=Cancel Shipment;resourceType=MenuProtection;action=process']}

    OK the dialog so the permission-checking EL is added as a value to the disabled property. Running the application and expanding the Shipment menu shows the Cancel Shipments menu item disabled for all users that don't have the custom menu protection resource permission granted.

    Note: Following the steps listed above, you create a JAAS permission and declaratively configure it for function security in an ADF application. Do you need to understand JAAS for this? No! This is one of the benefits that you gain from using the ADF development framework.

    To implement multiple lines of defense for your application, the action performed when clicking the enabled "Cancel Shipments" option should also check if the authenticated user is allowed to process it. For this, code as shown below can be used in a managed bean:

        public void onCancelShipment(ActionEvent actionEvent) {
            SecurityContext securityCtx =
                ADFContext.getCurrent().getSecurityContext();

            // create an instance of ResourcePermission(String type, String name,
            // String action)
            ResourcePermission resourcePermission =
                new ResourcePermission("MenuProtection", "Cancel Shipment",
                                       "process");

            boolean userHasPermission =
                securityCtx.hasPermission(resourcePermission);
            if (userHasPermission) {
                // execute privileged logic here
            }
        }

    Note: To learn more about ADF Security, visit http://download.oracle.com/docs/cd/E17904_01/web.1111/b31974/adding_security.htm#BGBGJEAH

    Note: A monthly summary of OTN Harvest blog postings can be downloaded from ADF Code Corner. The monthly summary is a PDF document that contains supporting screen shots for some of the postings: http://www.oracle.com/technetwork/developer-tools/adf/learnmore/index-101235.html

    Read the article

  • Windows Phone 7 Series - Tools and Resources

    - by TechTwaddle
    Unless you've been living in the caves of Lascaux for the past couple of days, you probably know what's happening in the world of Windows Phone. Microsoft unveiled the developer tools required to develop applications and games for Windows Phone 7 at MIX10 a couple of days back. Silverlight and XNA are the major frameworks, no big surprise there. And the best news of all is that all the development tools are free! So if you are planning to develop apps for Windows Phone 7, read on.

    The first place, or more appropriately hub, for you is the Windows Phone Developer Portal. It has most of the information you need to get you started. Now there is a ton of information available at other places too. In this post, I take time to put all the information that I found useful in one place, and I'll keep updating this as and when I find new stuff.

    Setting up the development environment

    1. Install Windows Phone Developer Tools CTP (Community Technology Preview). This will install Visual Studio 2010 Express, Silverlight, the XNA framework and the emulator for Windows Phone 7. It also installs a few support tools.
    2. Expression Blend 4 for Windows Phone:
       - Install Expression Blend 4 beta
       - Install Expression Blend Add-in Preview for Windows Phone
       - Install Expression Blend SDK Preview for Windows Phone

    Installing the above tools should set your machine up for development. I installed the tools on my Windows Vista SP1 machine and the process went smoothly without running into any major hitch. Note that the tools won't install on Windows XP; read the release notes of the CTP.

    Resources and Documentation

    1. Microsoft Windows Phone 7 Series Developer Training Kit
    2. Programming Windows Phone 7 Series by Charles Petzold. Contains a few chapters only; gives a good preview.
    3. MSDN documentation for Windows Phone 7 Development
    4. A sample chapter from Learning Windows Phone Programming [PDF] by Yochay Kiriaty and Jaime Rodriguez. The complete book will be available at a later time.
    5. Windows Phone 7 Developer Forum - where you can ask about questions and problems you run into, and the experts are there to help you.
    6. For Silverlight visit silverlight.net, and for XNA game development the XNA Creators Club is the place to go; also make sure you follow Michael Klutcher's and Shawn Hargreaves' blogs.
    7. And finally the MIX'10 website. Most of the sessions will be available for download later (some are already available). Click on the Windows Phone tag to get all the session details and downloads.

    If you are completely new to Silverlight and XNA (like me), and C# makes some sense to you, then I suggest you go through the Developer Training Kit. It gives a good start and ramps you up pretty quickly.

    Read the article

  • Managing Scripts in Oracle SQL Developer

    - by thatjeffsmith
    You backup your databases, right? You backup your home computer - your media collection, tax documents, bank accounts, etc. - right? You backup your handy-dandy SQL scripts, right? Ok, now that I've got your head nodding, I want to answer a question I get every so often: How can I manage my scripts in SQL Developer?

    This is an interesting question. First, it assumes that one SHOULD manage their scripts in their IDE. Now, what I think the question generally gets around to is, how can we:

    - Navigate to our scripts
    - Open them
    - Execute them

    What a good IDE should have is an interface to your existing Version Control System (VCS). SQL Developer supports both Subversion and Git out of the box. You can also download an extension via check-for-updates to get support for CVS. Now, what I'm about to show you COULD be done without versioning and controlling your scripts - but I want to ask you why you wouldn't want to do this? So, I'm going to proceed and assume that you do INDEED version your scripts already.

    Seeing what scripts you've already got in your repository

    This is very straightforward - just open the Team Versions panel, then connect to your repository. It shows you the files in your source control system. Now, I could 'preview' said file right away. If I open the file from here, we get a temp file copied down from the server to the local machine. This is a local temp copy of the controlled script - I can read/execute, but not write to it. And that might be all you need. But if your script calls other scripts, then you're going to want to check out the server copy of your stuff down to your local SVN working copy directory. That way, when your script calls another script, you're executing the PRODUCTION APPROVED copies of said scripts. And if you do SPOOL or other file I/O stuff, it will work as expected. To get to those said client copies of your scripts…

    Enter the Files panel

    The Files panel is accessible from the View menu. You can get to your files one of two ways. If you've touched the file recently, you can see it under the Recent tree. Otherwise, you can navigate to your local 'checked out' copies of your script(s). Open your local copies, see what's changed, etc. And I can access the change history and see what's been touched… What changes am I going to 'push out' if I commit this back to the server? Most of us work on teams, yes? This panel also gives me a heads-up if someone else is making changes to the same file. I can see the 'incoming' changes as well.

    To Sum It Up…

    If I want to get a script to run:

    - do a full get to your local directory
    - open the script(s)

    The Files panel will tell you if your local copy is out of date from the server, and if you have made local changes you've forgotten to commit back up to the server and your fellow teammates. Now, if you're the selfish type and don't want to share, that's fine. But you should still be backing up your scripts, and you can still use the Files panel to manage your scripts.

    Read the article

  • Visual Studio 2010 Best Practices

    - by Etienne Tremblay
    I'd like to thank Packt for providing me with a review version of the Visual Studio 2010 Best Practices eBook. In fairness, I also know the author, Peter, having seen him speak at DevTeach on many occasions. I started by looking at the table of contents to see what this book was about, knowing that "best practices" is a real misnomer; I wanted to see what they were. I really like the fact that he starts the book by saying they are not really best practices but rather recommended practices.

    As a Team Foundation Server user I found that chapter 2 was more for the open source crowd, and I really skimmed it. The portion on branching was well documented, although I'm not a fan of the testing branch myself, but the rest was right on. The section on the merge remote changes (bring the outside to you) paradigm is really important and was touched on.

    Chapter 3 has good solid practices on low-level constructs like generics and exceptions. Chapter 4 dives into architectural practices like decoupling, distributed architecture and data-based architecture. DTOs and ORMs are touched on briefly, as is NoSQL.

    Chapter 5 is about deployment and is really a great primer on all the "packaging" technologies like Visual Studio Setup and Deployment (deprecated in 2012), ClickOnce and WiX, the major player outside of commercial solutions. There is a nice section on how to move from VSSD to WiX; this is going to be important in the coming years due to the fact that VS 2012 doesn't support VSSD.

    In chapter 6 we dive into automated testing practices, including test coverage, mocking, TDD, SpecDD and continuous testing. Peter covers all those concepts really nicely, albeit succinctly. Being a book on recommended practices, I find this is really good.

    I really enjoyed chapter 7, which gave me a lot of great tips to enhance my Visual Studio "experience". Tips on organizing projects were good. Also, even though I knew about configurations, I like that he put that in there so you can move all your settings to another machine; a lot of people don't know about that. Quick Find and ReSharper are also briefly covered. He touches on macros (deprecated in 2012). Finally, he touches on continuous integration, a very important concept in today's ALM landscape.

    Chapter 8 is all about parallelization: threads, async, division of labor, reactive extensions. All those concepts are touched on, and again generalized approaches to those modern problems are given.

    Chapter 9 goes into distributed apps, the most used and accepted practice in the industry for .NET projects. The chapter tackles concepts like scalability, messaging and cloud (the flavor of the month of distributed apps, although I think this one will stick ;-)). He also looks at protocols (TCP/UDP) and how to debug distributed apps. He touches on logging and health monitoring.

    Chapter 10 tackles recommended practices for web services, starting with implementing WCF services, which goes into all sorts of goodness like how to host in IIS or self-host, how to manually test WCF services, and a section on authentication and authorization. ASP.NET web services are also touched on in that chapter.

    All in all a good read, with nice tips and accepted practices. I like the conciseness of the subjects; Peter touches on a lot of things in this book and uses a lot of the current technology flavors to explain the concepts.

    Cheers,
    ET

    Read the article

  • Using the latest (stable release) of Oracle Developer Tools for Visual Studio 11.1.0.7.20.

    - by mbcrump
    +  = Simple and safe data connections.

    This guide is for someone wanting to use the latest ODP.NET quickly without reading the official documentation; it will get you up and running in about 15 minutes. I reviewed the referral traffic to my earlier post, Setting up ODP.NET with Win7 x64, and noticed most people were searching for one of the following terms:

        "how to use odp.net with vs"
        "setup connection odp.net"
        "query db using odp and vs"

    While my article provided links and a sample tnsnames.ora file, it really didn't tell you how to use it. I'm hoping that this brief tutorial will help. So before we get started, you will need the following:

    1. Download www.oracle.com/technology/software/tech/dotnet/utilsoft.html from Oracle and install it. It is the first one on the page.
    2. Visual Studio 2008 or 2010.

    It should be noted that the System.Data.OracleClient namespace is the OLD .NET Framework Data Provider for Oracle. It should not be used anymore, as it has been deprecated. The latest version, which is what we are using, is Oracle.DataAccess.Client.

    First things first: add a reference to Oracle.DataAccess.Client after you install ODP.NET.

    Copy and paste the following C# code into your project, replace the relevant info (including the query string), and you should be able to return data. I have commented several lines of code to assist in understanding what it is doing.

        using System;
        using System.Data;
        using Oracle.DataAccess.Client;

        namespace ConsoleApplication1
        {
            class Program
            {
                static void Main(string[] args)
                {
                    try
                    {
                        // Set up the data source.
                        string oradb = "Data Source=(DESCRIPTION ="
                                     + "(ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = hostname)(PORT = 1521)))"
                                     + "(CONNECT_DATA = (SERVICE_NAME = SERVICENAME)));"
                                     + "Persist Security Info=True;User ID=USER;Password=PASSWORD;";

                        // The value to look up (declared here so the sample compiles;
                        // replace with any valid username).
                        string username = "USER";

                        // Open a connection to Oracle - this could be moved outside the try.
                        OracleConnection conn = new OracleConnection(oradb);
                        conn.Open();

                        // Create the command and use parameters to prevent SQL injection attacks.
                        OracleCommand cmd = new OracleCommand();
                        cmd.Connection = conn;
                        cmd.CommandText = "select username from table where username = :username";

                        OracleParameter p1 = new OracleParameter("username", OracleDbType.Varchar2);
                        p1.Value = username;
                        cmd.Parameters.Add(p1);

                        cmd.CommandType = CommandType.Text;

                        OracleDataReader dr = cmd.ExecuteReader();
                        dr.Read();

                        // Contains the value of the data row.
                        Console.WriteLine(dr["username"].ToString());

                        // Dispose of the objects.
                        dr.Dispose();
                        cmd.Dispose();
                        conn.Dispose();
                    }
                    catch (OracleException ex) // Catches only Oracle errors
                    {
                        switch (ex.Number)
                        {
                            case 1:
                                Console.WriteLine("Error attempting to insert duplicate data.");
                                break;
                            case 12545:
                                Console.WriteLine("The database is unavailable.");
                                break;
                            default:
                                Console.WriteLine(ex.Message.ToString());
                                break;
                        }
                    }
                    catch (Exception ex) // Catches any error not previously caught
                    {
                        Console.WriteLine("Unidentified Error: " + ex.Message.ToString());
                    }
                }
            }
        }

    At this point, you should have a working program that returns data from an Oracle database. If you are still having trouble, then drop me a line and I will be happy to assist. As of this writing, Oracle has announced the latest beta release of ODP.NET, 11.2.0.1.1 Beta. This release includes .NET Framework 4 and .NET Framework 4 Client Profile support. You may want to hold off on this version for a while as it's BETA, and I wouldn't want any production code using BETA software.
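    One footnote on the sample's cleanup: the explicit Dispose() calls at the end are skipped if an exception is thrown earlier in the try block. A hedged variation of the same data-access code (same hypothetical connection string, table and username variable as above) wraps the objects in using blocks so the connection, command, and reader are released on every path:

        using (OracleConnection conn = new OracleConnection(oradb))
        using (OracleCommand cmd = new OracleCommand(
            "select username from table where username = :username", conn))
        {
            conn.Open();

            OracleParameter p1 = new OracleParameter("username", OracleDbType.Varchar2);
            p1.Value = username;
            cmd.Parameters.Add(p1);

            using (OracleDataReader dr = cmd.ExecuteReader())
            {
                while (dr.Read())
                {
                    Console.WriteLine(dr["username"].ToString());
                }
            }
        } // Dispose() runs here even when an exception is thrown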

    Read the article

  • New Rapid Install StartCD 12.2.0.48 for EBS 12.2 Now Available

    - by Max Arderius
    A new Rapid Install startCD (Patch 18086193) for Oracle E-Business Suite Release 12.2 is now available. We recommend that all EBS customers installing or upgrading to EBS 12.2 use this latest update. The startCD updates are distributed to customers via My Oracle Support Patch which can be uncompressed on top of any previous 12.2 startCD under the main staging area. This patch replaces any previous startCDs.

    What's New in This Update?

    This new startCD version 12.2.0.48 includes important fixes for multi-node installs, RAC, pre-install checks, platform-specific issues, and upgrade scenario failures:

    18703814 - QREP:122:RI:ISSUE WITH CHECKOS.CMD
    18689527 - QREP:122:RI:ISSUE WITH FNDCORE.DLL SHIPPED AS PART OF R122 PACKAGE
    18548485 - QREP1224:4:JAR SIGNER ISSUE DUE TO THE RI UPGRADE AUTOCONFIG CHANGES
    18535812 - QREP:1220.48_4: 12.2.0 UPGRADE FILE SYSTEM LAY OUT IS AFFECTING THE DB TABLES
    18507545 - WIN: UNABLE TO LAY DOWN FS PRIOR TO 12.2 UPGRADE WITHOUT AFFECTING RUNNING DB
    18476041 - UNABLE TO LAY DOWN FS PRIOR TO 12.2 UPGRADE WITHOUT AFFECTING PRODUCTION DB
    18459887 - R12.2 INSTALLATION FAILURE - OPMNCTL: NOT FOUND
    18436053 - START CD 48_4 - ISSUES WITH TEMP SPACE CHECK
    18424747 - QREP1224.3:ADD SERVER BROWSE BUTTON NOT WORKING
    18421132 - *RW-50010: ERROR: - SCRIPT HAS RETURNED AN ERROR: 1
    18403700 - QREP122.48:RI:UPGRADE RI PRECHECK HUNG IN SPLIT TIER APPS NODE ( NO SILENT )
    18383075 - ADD VERBOSE OPTION TO RAC VALIDATION
    18363584 - UPTAKE INSTALL SCRIPTS FOR XB48_4
    18336093 - QREP:122:RI:PATCH FS ADMIN SERVICE RUNNING AFTER RI UPGRADE CONFIGURE MODE
    18320278 - QREP:1224.3:PLATFORM SPECIFIC SYNTAX ERRORS WITH DATE COMMAND IN DB CHECKER
    18314643 - DISABLE SID=DB_NAME FOR RI UPGRADE FLOW IN RAC
    18298977 - RI: EXCEPTION WHILE CLICKING RAC NODES BUTTON ON A NON-RAC SERVER
    18286816 - QREP122:STARTCD48_3:TRAVERSING FROM VISION PASSW SCREEN TO PROD
    18286371 - QREP122:STARTCD48_3:AMBIGUOUS MESSAGE DURING STAGE AREA CHECK ON HP
    18275403 - QREP122:48:RI UPGRADE WITH EOH POST CHECKS HANGS IN SPLIT TIER DB NODE
    18270631 - QREP122.48:MULTI-NODE RI USING NON-DEFAULT PASSWORDS NOT WORKING
    18266046 - QREP122:48:RI NOT ALLOWING TO IGNORE THE RAC PRE-CHECK FAILURE
    18242201 - UPTAKE TXK INSTALL SCRIPTS AND PLATFORMS.ZIP INTO STARTCD XB48_3
    18236428 - QREP122.47:RI UPGRADE EXISTING OH FOR NON-DEFAULT APPS PASSWORD NOT WORKING
    18220640 - INCONSISTENT DATABASE PORTS DURING EBS 12.2 INSTALLATION FOR STARTCD 12.2.0.47
    18138796 - QREP122:47:RI 10.1.2 TECHSTACK NOT WORKING IF WE RUN RI FROM NEW STARTCD LOC
    18138396 - TST1220: CONTROL FILE NAMING IN RAPID INSTALL SEEMS TO HAVE ISSUES
    18124144 - IMPROVE HANDLING ERRORS FOUND IN CLUVFY LOG DURING PREINSTALL CHECKS
    18111361 - VALIDATE ASM DB DATA FILES PATH AS +<DATA GROUP>/<PATH>
    18102504 - QREP1220.47_5: UNZIP PANEL DOES NOT CREATE THE CORRECT STAGE
    18083342 - 12.2 UPGRADE JAVA.NET.BINDEXCEPTION: CANNOT ASSIGN REQUESTED ADDRESS
    18082140 - QREP122:47:RAC DB VALIDATION IS FAILS WITH EXIT STATUS IS 6
    18062350 - 12.2.3 UPG: 12.2.0 INSTALLATION LOGS
    18050840 - RI: UPGRADE WITH EXISTING RAC OH:SECONDARY DB NODE NAME IS BLANK
    18049813 - RAC LOV DEFAULTS NOT SAVED UNLESS "SELECT" IS CLICKED
    18003592 - TST1220:ADDITIONAL FREE SPACE CHECK FOR RI NEEDS TO BE CHECKED
    17981471 - REMOVE ASM SPACE CHECK FROM RACVALIDATIONS.SH
    17942179 - R12.2 INSTALL FAILING AT ADRUN11G.SH WITH ERRORS RW-50004 & RW-50010
    17893583 - QREP1220.47:VALIDATION OF O.S IN RAPIDWIZ IN THE DB NODE CONFIGURATION SCREEN
    17886258 - CLEANUP FND_NODES DURING UPGRADE FLOW
    17858010 - RI POST INSTALL CHECKS (SSH VERIFICATION) STEP IS FAILING
    17799807 - GEOHR: 12.2.0 - ERRORS IN RAPIDWIZ AND ADCONFIG LOGS
    17786162 - QREP1223.4:RI:SERVICE_NAMES IS PRINTED AS SERVICE_NAME IN RI SCREEN
    17782455 - RI: CONFIRM DEFAULT APPS PASSWORD IN SILENT MODE KICKOFF
    17778130 - RI:ADMIN SERVER TO BE UP ON PRIMARY MID-TIER IN MULTI-NODE UPGRADE FS CREATION
    17773989 - UN-SUPPORTED PLATFORM SHOWS 32 BIT AS HARD-CODED
    17772655 - RELEVANT MESSAGE DURING THE RAPDIWIZ -TECHSTACK
    17759279 - VERIFICATION PANEL DOES NOT EXPAND TECHNOLOGY STACK
    17759183 - BUILDSTAGE SCRIPT MENU NEEDS TO BE ADJUSTED
    17737186 - DATABASE PRE-REQ CHECK INCORRECTLY REPORTS SUCCESS ON AIX
    17708082 - 12.2 INSTALLATION - OS PRE-REQUISITES CHECK
    17701676 - TST122: GENERATE WRONG S_DBSID FOR PATCH FILE SYSTEM AT PHASE PREPARE
    17630972 - /TMP PRE-REQ INSTALLATION CHECK
    17617245 - 12.2 VISION INSTALL FAILS ON AIX
    17603342 - OMCS: DB STAGING COMPLAINS WHILE MOVING IT TO FINAL LOCATION
    17591171 - OMCS: DB STAGING FAILS WITH FRESH INSTALL R12.2
    17588765 - CHECKER VERSION AND PLUGIN VERSION
    17561747 - BUILDSTAGE.SH FAILS WITH ERROR WHEN STAGE HOSTED ON 32BIT LINUX
    17539198 - RAPID INSTALL NEEDS TO IGNORE NON-REQUIRED STAGE ELEMENTS
    17272808 - APPS USERS THAT HAVE DEFAULT PASSWORD AFTER 12.2 RAPID INSTALL

    References

    12.2 Documentation Library
    1581299.1 : EBS 12.2 Product Information Center
    1320300.1 : Oracle E-Business Suite Release Notes, Release 12.2
    1606170.1 : Oracle E-Business Suite Technology Stack and Applications DBA Release Notes for Release 12.2.3
    1624423.1 : Oracle E-Business Suite Technology Stack and Applications DBA Release Notes for R12.TXK.C.Delta.4 and R12.AD.C.Delta.4
    1594274.1 : Oracle E-Business Suite Release 12.2: Consolidated List of Patches and Technology Bug Fixes

    Related Articles

    Oracle E-Business Suite 12.2 Now Available
    startCD options to install Oracle E-Business Suite Release 12.2

    Read the article

  • How to "apt-get -f install" without deleting software?

    - by Jeggy
    I know Guitar Pro doesn't support 64-bit, but I did get it to work with this command:

    jeggy@jeggy-XPS:~$ sudo dpkg --force-architecture -i GuitarPro6-rev9063.deb
    [sudo] password for jeggy:
    Selecting previously unselected package guitarpro6:i386.
    (Reading database ... 285729 files and directories currently installed.)
    Unpacking guitarpro6:i386 (from GuitarPro6-rev9063.deb) ...
    dpkg: dependency problems prevent configuration of guitarpro6:i386:
     guitarpro6:i386 depends on gksu.
    dpkg: error processing guitarpro6:i386 (--install):
     dependency problems - leaving unconfigured
    Processing triggers for bamfdaemon ...
    Rebuilding /usr/share/applications/bamf.index...
    Processing triggers for desktop-file-utils ...
    Processing triggers for gnome-menus ...
    Errors were encountered while processing:
     guitarpro6:i386

    And even after that error the program works perfectly fine, and updating and adding PPAs to the system works great, but when I try to install some other software I get this error:

    jeggy@jeggy-XPS:~$ sudo apt-get install elinks
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    You might want to run 'apt-get -f install' to correct these:
    The following packages have unmet dependencies:
     elinks : Depends: libfsplib0 (>= 0.9) but it is not going to be installed
              Depends: liblua50 (>= 5.0.3) but it is not going to be installed
              Depends: liblualib50 (>= 5.0.3) but it is not going to be installed
              Depends: libtre5 but it is not going to be installed
              Depends: elinks-data (= 0.12~pre5-7ubuntu1) but it is not going to be installed
     guitarpro6:i386 : Depends: gksu:i386 but it is not going to be installed
    E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    And whenever I run "apt-get -f install" I get this:

    jeggy@jeggy-XPS:~$ sudo apt-get -f install
    [sudo] password for jeggy:
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Correcting dependencies... Done
    The following packages were automatically installed and are no longer required:
      dconf-gsettings-backend:i386 python-levenshtein python-indicate libav-tools
      libstartup-notification0:i386 libxmuu1:i386 libavfilter-extra-2 libbabl-0.0-0
      libgegl-0.0-0 libgconf2-4:i386 python-vobject libgtk-3-0:i386 libpam-cap:i386
      python-utidylib libdconf0:i386 python-iniparse python-xmpp
      libpam-gnome-keyring:i386 libxcb-util0:i386 python-farstream
    Use 'apt-get autoremove' to remove them.
    The following packages will be REMOVED:
      guitarpro6:i386
    0 upgraded, 0 newly installed, 1 to remove and 7 not upgraded.
    1 not fully installed or removed.
    After this operation, 84,0 MB disk space will be freed.
    Do you want to continue [Y/n]? y
    (Reading database ... 286979 files and directories currently installed.)
    Removing guitarpro6:i386 ...
    dpkg: warning: while removing guitarpro6:i386, directory '/opt/GuitarPro6/updater' not empty so not removed.
    dpkg: warning: while removing guitarpro6:i386, directory '/opt/GuitarPro6/Data/Soundbanks' not empty so not removed.
    Processing triggers for bamfdaemon ...
    Rebuilding /usr/share/applications/bamf.index...
    Processing triggers for desktop-file-utils ...
    Processing triggers for gnome-menus ...

    And now Guitar Pro is deleted. How can I install Guitar Pro and still be able to install other software afterwards?

    Read the article

< Previous Page | 844 845 846 847 848 849 850 851 852 853 854 855  | Next Page >