Search Results

Search found 15383 results on 616 pages for 'home automation'.


  • Silverlight 4 Twitter Client – Part 6

    - by Max
    In this post, we are going to look into implementing lists in our Twitter application, and also into enhancing the data grid to display the status messages in a pleasing way with the profile images. Twitter lists are a really cool feature that Twitter recently added; I love them, and I have quite a few lists set up: one for DOTNET gurus, one for SQL Server gurus and one for a few celebrities. You can follow them here. Now let us move on to our tutorial.

1) Lists can be subscribed to in two ways: they can be the user's own lists, which he has created, or lists that the user is following. For example, I have created 3 lists myself and I am following 1 other list created by another user. Both of them cannot be fetched in the same API call; it's a two-step process.

2) In the TwitterCredentialsSubmit method we have in Home.xaml.cs, let us make the first API call to get the lists that the user has created. For this, the call has to be made to https://twitter.com/<TwitterUsername>/lists.xml. The API reference is available here.

myService1.AllowReadStreamBuffering = true; myService1.UseDefaultCredentials = false; myService1.Credentials = new NetworkCredential(GlobalVariable.getUserName(), GlobalVariable.getPassword()); myService1.DownloadStringCompleted += new DownloadStringCompletedEventHandler(ListsRequestCompleted); myService1.DownloadStringAsync(new Uri("https://twitter.com/" + GlobalVariable.getUserName() + "/lists.xml"));

3) Now let us look at implementing the event handler – ListsRequestCompleted – for this.

public void ListsRequestCompleted(object sender, System.Net.DownloadStringCompletedEventArgs e) { if (e.Error != null) { StatusMessage.Text = "This application must be installed first."; parseXML(""); } else { //MessageBox.Show(e.Result.ToString()); parseXMLLists(e.Result.ToString()); } }

4) Now let us look at parseXMLLists in detail.

xdoc = XDocument.Parse(text); var answer = (from status in xdoc.Descendants("list") select status.Element("name").Value); foreach (var p in answer) { Border bord = new Border(); bord.CornerRadius = new CornerRadius(10, 10, 10, 10); Button b = new Button(); b.MinWidth = 70; b.Background = new SolidColorBrush(Colors.Black); b.Foreground = new SolidColorBrush(Colors.Black); //b.Width = 70; b.Height = 25; b.Click += new RoutedEventHandler(b_Click); b.Content = p.ToString(); bord.Child = b; TwitterListStack.Children.Add(bord); }

So what I am doing here is dynamically creating a button for each of the lists and putting them within a StackPanel, and for each of these buttons I am wiring up an event handler, b_Click, which will be fired on button click. We will look into this method in detail soon. For now, let us get the buttons displayed.

5) Now the user might have some lists to which he has subscribed. We need to create a button for each of these as well. At the end of the TwitterCredentialsSubmit method, we need to make a call to http://api.twitter.com/1/<TwitterUsername>/lists/subscriptions.xml. The reference is available here. The code will look like this:
myService2.AllowReadStreamBuffering = true; myService2.UseDefaultCredentials = false; myService2.Credentials = new NetworkCredential(GlobalVariable.getUserName(), GlobalVariable.getPassword()); myService2.DownloadStringCompleted += new DownloadStringCompletedEventHandler(ListsSubsRequestCompleted); myService2.DownloadStringAsync(new Uri("http://api.twitter.com/1/" + GlobalVariable.getUserName() + "/lists/subscriptions.xml"));

6) In the event handler – ListsSubsRequestCompleted – we need to parse through the XML string and create a button for each of the subscribed lists; let us see how. I have taken only the "full_name"; you can choose what you want, refer to the documentation here. Note that the full_name will have the @<UserName>/<ListName> format – this will be useful very soon.

xdoc = XDocument.Parse(text); var answer = (from status in xdoc.Descendants("list") select status.Element("full_name").Value); foreach (var p in answer) { Border bord = new Border(); bord.CornerRadius = new CornerRadius(10, 10, 10, 10); Button b = new Button(); b.Background = new SolidColorBrush(Colors.Black); b.Foreground = new SolidColorBrush(Colors.Black); //b.Width = 70; b.MinWidth = 70; b.Height = 25; b.Click += new RoutedEventHandler(b_Click); b.Content = p.ToString(); bord.Child = b; TwitterListStack.Children.Add(bord); }

Please note, I am setting the button width to auto based on the content and also giving it a minimum width value. I wanted to create rounded-corner buttons, but for some reason it's not working. Also add this StackPanel – TwitterListStack – to Home.xaml:

<StackPanel HorizontalAlignment="Center" Orientation="Horizontal" Name="TwitterListStack"></StackPanel>

After doing this, you will get a series of buttons at the top of the home page.

7) Now the button click event handler – b_Click. In this method, once a button is clicked, I call another method with the content string of the clicked button as the parameter.

Button b = (Button)e.OriginalSource; getListStatuses(b.Content.ToString());

8) Now the getListStatuses method:

toggleProgressBar(true); WebRequest.RegisterPrefix("http://", System.Net.Browser.WebRequestCreator.ClientHttp); WebClient myService = new WebClient(); myService.AllowReadStreamBuffering = true; myService.UseDefaultCredentials = false; myService.DownloadStringCompleted += new DownloadStringCompletedEventHandler(TimelineRequestCompleted); if (listName.IndexOf("@") > -1 && listName.IndexOf("/") > -1) { string[] arrays = null; arrays = listName.Split('/'); arrays[0] = arrays[0].Replace("@", " ").Trim(); //MessageBox.Show(arrays[0]); //MessageBox.Show(arrays[1]); string url = "http://api.twitter.com/1/" + arrays[0] + "/lists/" + arrays[1] + "/statuses.xml"; //MessageBox.Show(url); myService.DownloadStringAsync(new Uri(url)); } else myService.DownloadStringAsync(new Uri("http://api.twitter.com/1/" + GlobalVariable.getUserName() + "/lists/" + listName + "/statuses.xml"));

Please note that the URL to call will be different depending on the list clicked. If it is user-created, the URL format will be http://api.twitter.com/1/<CurrentUser>/lists/<ListName>/statuses.xml, but if it is a subscribed list, it will be http://api.twitter.com/1/<ListOwnerUserName>/lists/<ListName>/statuses.xml. The first one is pretty straightforward to implement, but if it is a subscribed list, we need to split the listName string to get the list owner and list name and use them to form the URL.
So that is what I have done in this method: if the listName contains "@" and "/", I build the URL differently.

9) Until now, we have been using only a few nodes of the status message XML string; now we will fetch a new field – "profile_image_url". Images in the datagrid – COOL. For that, we need to modify our Status.cs file to include two more fields, one string and one BitmapImage, with get and set.

public string profile_image_url { get; set; } public BitmapImage profileImage { get; set; }

10) Now let us change the generic parseXML method which is used for binding to the datagrid.

public void parseXML(string text) { XDocument xdoc; xdoc = XDocument.Parse(text); statusList = new List<Status>(); statusList = (from status in xdoc.Descendants("status") select new Status { ID = status.Element("id").Value, Text = status.Element("text").Value, Source = status.Element("source").Value, UserID = status.Element("user").Element("id").Value, UserName = status.Element("user").Element("screen_name").Value, profile_image_url = status.Element("user").Element("profile_image_url").Value, profileImage = new BitmapImage(new Uri(status.Element("user").Element("profile_image_url").Value)) }).ToList(); DataGridStatus.ItemsSource = statusList; StatusMessage.Text = "Datagrid refreshed."; toggleProgressBar(false); }

Here we are creating a new bitmap image from the image URL, creating a new Status object for every status message and binding them to the data grid. Refer to the Twitter API documentation here. You can choose any column you want.

11) Until now, we have been using auto-generated columns for the data grid, but if you want it to be really cool, you need to define the columns with templates, etc.

<data:DataGrid AutoGenerateColumns="False" Name="DataGridStatus" Height="Auto" MinWidth="400"> <data:DataGrid.Columns> <data:DataGridTemplateColumn Width="50" Header=""> <data:DataGridTemplateColumn.CellTemplate> <DataTemplate> <Image Source="{Binding profileImage}" Width="50" Height="50" Margin="1"/> </DataTemplate> </data:DataGridTemplateColumn.CellTemplate> </data:DataGridTemplateColumn> <data:DataGridTextColumn Width="Auto" Header="User Name" Binding="{Binding UserName}" /> <data:DataGridTemplateColumn MinWidth="300" Width="Auto" Header="Status"> <data:DataGridTemplateColumn.CellTemplate> <DataTemplate> <TextBlock TextWrapping="Wrap" Text="{Binding Text}"/> </DataTemplate> </data:DataGridTemplateColumn.CellTemplate> </data:DataGridTemplateColumn> </data:DataGrid.Columns> </data:DataGrid>

I have used only three columns – profile image, username and status text. Now our datagrid will look super cool, like this. Coincidentally, Tim Heuer is in the screenshot; he is a Silverlight guru and works on the Silverlight team at Microsoft. His blog is really super. Here is the zipped file for all the XAML, XAML.cs and class files. OK, let us stop here for now; we will look into implementing a few more features in the next few posts, and then I am going to look into developing an ASP.NET MVC 2 application. Hope you all liked this post. If you have any queries or suggestions, feel free to comment below or contact me. Cheers! Technorati Tags: Silverlight,LINQ,Twitter API,Twitter,Silverlight 4
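For reference, here is a minimal sketch of what the complete Status.cs model class could look like after step 9. This is an assumption pieced together from the properties bound in parseXML above (ID, Text, Source, UserID, UserName, profile_image_url, profileImage), not the author's actual file.

    using System.Windows.Media.Imaging;

    // Hypothetical Status.cs - property names match the bindings used in parseXML above.
    public class Status
    {
        public string ID { get; set; }
        public string Text { get; set; }
        public string Source { get; set; }
        public string UserID { get; set; }
        public string UserName { get; set; }

        // Added in this post: the avatar URL and a ready-to-bind image for the DataGrid column.
        public string profile_image_url { get; set; }
        public BitmapImage profileImage { get; set; }
    }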

    Read the article

  • Can't deploy rails 4 app on Bluehost with Passenger 4 and nginx

    - by user2205763
    I am at Bluehost (dedicated server) trying to run a rails 4 app. I asked to have my server re-imaged, specifying that I do not want rails, ruby, or passenger install automatically as I wanted to install the latest versions myself using a version manager (Bluehost by default offers rails 2.3, ruby 1.8, and passenger 3, which won't work with my app). I installed ruby 1.9.3p327, rails 4.0.0, and passenger 4.0.5. I can verify this by typing, "ruby -v", "rails -v", and "passenger -v" (also "gem -v"). I made sure to install these not as root, so that I don't get a 403 forbidden error when trying to deploy the app. I installed passenger by typing "gem install passenger", and then installed the nginx passenger module (into "/nginx") with "passenger-install-nginx-module". I am trying to run my rails app on a subdomain, http://development.thegraduate.hk (I am using the subdomain to show my client progress on the website). In bluehost I created that subdomain, and had it point to "public_html/thegraduate". I then created a symlink from "rails_apps/thegraduate/public" to "public_html/thegraduate" and verified that the symlink exists. The problem is: when I go to http://development.thegraduate.hk, I get a directory listing. There is nothing resembling a rails app. I have not added a .htaccess file to /rails_apps/thegraduate/public, as that was never specified in the installation of passenger. It was meant to be 'install and go'. When I type "passenger-memory-status", I get 3 things: - Apache processes (7) - Nginx processes (0) - Passenger processes (0) So it appears that nginx and passenger are not running, and I can't figure out how to get it to run (I'm not looking to have it run as a standalone server). Here is my nginx.conf file (/nginx/conf/nginx.conf): #user nobody; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { passenger_root /home/thegrad4/.rbenv/versions/1.9.3-p327/lib/ruby/gems/1.9.1/gems/passenger-4.0.5; passenger_ruby /home/thegrad4/.rbenv/versions/1.9.3-p327/bin/ruby; include mime.types; default_type application/octet-stream; #log_format main '$remote_addr - $remote_user [$time_local] "$request" ' # '$status $body_bytes_sent "$http_referer" ' # '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; server { listen 80; server_name development.thegraduate.hk; root ~/rails_apps/thegraduate/public; passenger_enabled on; #charset koi8-r; #access_log logs/host.access.log main; location / { root html; index index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } # another virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; # location / 
{ # root html; # index index.html index.htm; # } #} # HTTPS server # #server { # listen 443; # server_name localhost; # ssl on; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; # ssl_session_timeout 5m; # ssl_protocols SSLv2 SSLv3 TLSv1; # ssl_ciphers HIGH:!aNULL:!MD5; # ssl_prefer_server_ciphers on; # location / { # root html; # index index.html index.htm; # } #} } I don't get any errors, just the directory listing. I've tried to be as detailed as possible. Any help on this issue would be greatly appreciated as I've been stumped for the past 3 days. Scouring the web has not helped as my issue seems to be specific to me. Thanks so much. If there are any potential details I forgot to specify, just ask. ** ADDITIONAL INFORMATION ** Going to development.thegraduate.hk/public/ will correctly display the index.html page in /rails_apps/thegraduate/public. However, changing root in the routes.rb file to "root = 'home#index'" does nothing.
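One possible culprit worth checking in the configuration above (offered as a guess, not a confirmed diagnosis): nginx does not expand "~" in the root directive, and the nested "location / { root html; index ...; }" block takes over the request and serves nginx's own html directory instead of the Rails app. A more typical Passenger server block, using an absolute path assumed from the paths mentioned in the question, would look roughly like the sketch below; the nginx built by passenger-install-nginx-module also has to be started explicitly, and nothing else (such as Apache) can already be bound to port 80.

    server {
        listen 80;
        server_name development.thegraduate.hk;

        # Absolute path - nginx does not expand "~" here.
        # Assumed from the question's paths; adjust to the real home directory.
        root /home/thegrad4/rails_apps/thegraduate/public;
        passenger_enabled on;

        # No "location / { root html; index ...; }" block:
        # it would override Passenger and serve static files from nginx's html dir.
    }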

    Read the article

  • Oracle Leader in Transportation Management

    - by John Murphy
    Oracle Named a Leader in the Transportation Management Systems Market by Leading Analyst Firm Redwood Shores, Calif. – October 15, 2012 News Facts Gartner, Inc. has placed Oracle Transportation Management in the Leaders Quadrant of its 2012 report, “Magic Quadrant for Transportation Management Systems (TMS).” (1) Gartner Magic Quadrants position vendors within a particular market segment based on their completeness of vision and ability to execute on that vision. According to the report, “Multiple subcomponents make up a comprehensive TMS across planning (for example, load consolidation, routing, mode selection and carrier selection) and execution (for example, tendering loads to carriers, shipment track and trace, and freight audit and payment).” Built on modern, flexible, Internet based architecture, Oracle Transportation Management is a global transportation and logistics operations system that allows companies to minimize cost, optimize service levels, support sustainability initiatives, and create flexible business process automation within their transportation and logistics networks. With a share of 26% of worldwide software revenue for 2011, Oracle is also number one in TMS vendor share according to Gartner’s report, “Market Trends: A Golden Opportunity in the Transportation Management System Market, 2012 – 2016.” (2) Supporting Quote “Shippers and logistics service providers face increasingly complex challenges as they try to reduce costs, secure capacity and improve overall freight efficiency,” said Derek Gittoes, vice president, logistics product strategy, Oracle. “We believe our high standing in both Gartner reports is a reflection of Oracle’s commitment to addressing these challenges by delivering the industry’s broadest and deepest transportation management platform. With a flexible and modern platform, we are able to support customers with both basic transportation needs, as well as those with highly complex logistics requirements.” Supporting Resources Magic Quadrant for Transportation Management Systems Market Trends: A Golden Opportunity in the Transportation Management System Market, 2012 – 2016 Oracle Transportation Management (1) Gartner, Inc., “Magic Quadrant for Transportation Management Systems,” by C. Dwight Klappich, August 23, 2012 (2) Gartner, Inc., “Market Trends: A Golden Opportunity in the Transportation Management System Market, 2012 – 2016,” by Chad Eschinger and C. Dwight Klappich, September 24, 2012. About Oracle Applications Over 65,000 customers worldwide rely on Oracle's complete, open and integrated enterprise applications to achieve superior results. Oracle provides a secure path for customers to benefit from the latest technology advances that improve the customer software experience and drive better business performance. Oracle Applications Unlimited is Oracle's commitment to customer choice through continuous investment and innovation in current applications offerings. Oracle's next-generation Fusion Applications build upon that commitment, and are designed to work with and evolve Oracle's Applications Unlimited offerings. Oracle's lifetime support policy helps ensure customers will continue to have a choice in upgrade paths, based on their enterprise needs. For more information on the latest Oracle Applications releases go towww.oracle.com/applications About Oracle Oracle engineers hardware and software to work together in the cloud and in your data center. For more information about Oracle (NASDAQ:ORCL), visit www.oracle.com. 
Trademarks Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. ###   Karen [email protected] Simon JonesBlanc & [email protected]

    Read the article

  • Commandline Purge in AS11

    - by Dheeraj Kumar
    AS11 – the B2B offering – consists of numerous features that are available via a command-line approach. Most of these supplement the already available user-interface-based approach. One such feature is purging of runtime data. The command-line purge option enables users to purge runtime data based on various criteria. It is an ANT-based command and provides the flexibility to selectively set the criteria for purging the runtime data. Providing the command-line option also enables the administrator to purge in bulk without visiting the B2B UI, which can also be used for automation purposes.

By default, archival is turned on for the purge activity. As a prerequisite, the respective folder needs to be configured in the database with the proper permission. When no filename is provided for the archived data, the sysdate will be used for the filename. Below are the various options to purge the runtime data:

Option              ANT option
Message state       -Dmsgstate
Date range          -Dfromdate, -Dtodate (format: dd/mm/yyyy hh:mm AM/PM)
Trading partner     -Dtp
Direction           -Ddirection
Message type        -Dmsgtype
Agreement name      -Dagreement
IdType / value      -Didtype, -Didvalue
Archive             -Darchive (true/false; true by default)
Archive file name   -Darchivename (optional; used when archive is set to true)

Note: when using -Darchivename, the value must be a unique file name. An existing file name used with -Darchivename throws an exception.

Below are a few of the ant commands and their options.

Purge based on date range and message state:
ant -f ant-b2b-util.xml b2bpurge -Dmode=RT -Dfromdate="19/12/2009 1:04 AM" -Dtodate="19/12/2009 1:05 AM" -Dmsgstate=MSG_COMPLETE -Darchivename="filename.dmp"

Purge based on direction:
ant -f ant-b2b-util.xml b2bpurge -Dmode=RT -Ddirection="OUTBOUND"

Purge based on agreement name:
ant -f ant-b2b-util.xml b2bpurge -Dmode=RT -Dagreement="agreement_name"

Purge based on trading partner name:
ant -f ant-b2b-util.xml b2bpurge -Dmode=RT -Dtp=GlobalChips

Purge based on message state:
ant -f ant-b2b-util.xml b2bpurge -Dmode=RT -Dmsgstate="MSG_COMPLETE"
ant -f ant-b2b-util.xml b2bpurge -Dmode=RT -Ddirection="OUTBOUND" -Dmsgstate="MSG_COMPLETE"
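Since each criterion is an independent -D option, they can also be combined in a single call, as the last example above suggests. As a hedged illustration (the partner name, dates and archive file below are placeholders, not taken from the original post), a purge restricted to one trading partner within a date range, archived to a named file, might look like:

    ant -f ant-b2b-util.xml b2bpurge -Dmode=RT -Dtp=GlobalChips -Dfromdate="01/12/2009 12:00 AM" -Dtodate="31/12/2009 11:59 PM" -Darchive=true -Darchivename="globalchips_dec09.dmp"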

    Read the article

  • WebCenter Customer Spotlight: Los Angeles Department of Water and Power

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

Solution Summary
Los Angeles Department of Water and Power (LADWP) is the largest public utility company in the United States, with over 1.6 million customers. LADWP provides water and power for millions of residential & commercial customers in Southern California. The goal of the project was to implement a newly designed web portal to increase customer self-service while reducing transactions via IVR, and to automate many of the paper-based processes into web-based workflows for its 1.6 million customers. LADWP implemented a self-service portal using Oracle WebCenter Portal & Oracle WebCenter Content, with Oracle SOA Suite for the integration of its complex back-end systems infrastructure. The new portal has received extremely positive feedback, not only from the customers and users of the portal, but also from other utilities. At Oracle OpenWorld 2012, LADWP won the prestigious WebCenter innovation award for its innovative solution.

Company Overview
Los Angeles Department of Water and Power (LADWP) is the largest public utility company in the United States, with over 1.6 million customers. LADWP provides water and power for millions of residential & commercial customers in Southern California. LADWP also bills most of these customers for sanitation services provided by another department in the city of Los Angeles.

Business Challenges
The goal of the project was to implement a newly designed web portal that is easy to navigate from a web browser and mobile devices, and that serves as the platform for surfacing internet and intranet applications at LADWP. The primary objective of the new portal was to increase customer self-service while reducing transactions via IVR and walk-up, and to automate many of the paper-based processes into web-based workflows for customers. This includes automation of:
- Self Service implemented through My Account (Bill Pay, Payment History, Bill History, Usage Analysis, Service Request Management)
- Financial Assistance Programs
- Customer Rebate Programs
- Turn Off / Turn On / Transfer of Services
- Outage Reporting
- eNotification (SMS, email)

Solution Deployed
LADWP implemented a self-service portal using Oracle WebCenter Portal & Oracle WebCenter Content. Using Oracle SOA Suite, they integrated various back-end systems, including:
- Oracle Siebel CRM
- IBM mainframe-based CIS
- FILENET for document management
- EBP Electronic Bill Payment System
- HP Imprint System for BillXML data
- Other systems, including outage reporting systems, SMS service, etc.
The new portal's features include:
- Complete graphical redesign based on best practices in UI design for high usability
- Customer self-service implemented through My Account (Bill Pay, Payment History, Bill History, Usage Analysis, Service Request Management)
- Financial Assistance Programs (CRM, WebCenter)
- Customer Rebate Programs (CRM, WebCenter)
- Turn On/Off/Transfer of services (commercial & residential)
- Outage Reporting
- eNotification (SMS, email)
- Multilingual (English & Spanish) – using WebCenter multi-language support
- Section 508 (ADA) compliant
- Search – using WebCenter SES (Secured Enterprise Search)
- Distributed authorship in WebCenter Content
- Mobile access (any mobile browser)

Business Results
The new portal has received extremely positive feedback, not only from customers and users of the portal, but also from other utilities. At Oracle OpenWorld 2012, LADWP won the prestigious WebCenter innovation award for its innovative solution.
Additional Information LADWP OpenWorld presentation Oracle WebCenter Portal Oracle WebCenter Content Oracle SOA Suite

    Read the article

  • Using Oracle BPM to Extend Oracle Applications

    - by Michelle Kimihira
    Author: Srikant Subramaniam, Senior Principal Product Manager, Oracle Fusion Middleware Customers often modify applications to meet their specific business needs - varying regulatory requirements, unique business processes, product mix transition, etc. Traditional implementation practices for such modifications are typically invasive in nature and introduce risk into projects, affect time-to-market and ease of use, and ultimately increase the costs of running and maintaining the applications. Another downside of these traditional implementation practices is that they literally cast the application in stone, making it difficult for end-users to tailor their individual work environments to meet specific needs, without getting IT involved. For many businesses, however, IT lacks the capacity to support such rapid business changes. As a result, adopting innovative solutions to change the economics of customization becomes an imperative rather than a choice. Let's look at a banking process in Siebel Financial Services and Oracle Policy Automation (OPA) using Oracle Business Process Management. This approach makes modifications simple, quick to implement and easy to maintain/upgrade. The process model is based on the Loan Origination Process Accelerator, i.e., a set of ready to deploy business solutions developed by Oracle using Business Process Management (BPM) 11g, containing customizable and extensible pre-built processes to fit specific customer requirements. This use case is a branch-based loan origination process. Origination includes a number of steps ranging from accepting a loan application, applicant identity and background verification (Know Your Customer), credit assessment, risk evaluation and the eventual disbursal of funds (or rejection of the application). We use BPM to model all of these individual tasks and integrate (via web services) with: Siebel Financial Services and (simulated) backend applications: FLEXCUBE for loan management, Background Verification and Credit Rating. The process flow starts in Siebel when a customer applies for loan, switches to OPA for eligibility verification and product recommendations, before handing it off to BPM for approvals. OPA Connector for Siebel simplifies integration with Siebel’s web services framework by saving directly into Siebel the results from the self-service interview. This combination of user input and product recommendation invokes the BPM process for loan origination. At the end of the approval process, we update Siebel and the financial app to complete the loop. We use BPM Process Spaces to display role-specific data via dashboards, including the ability to track the status of a given process (flow trace). Loan Underwriters have visibility into the product mix (loan categories), status of loan applications (count of approved/rejected/pending), volume and values of loans approved per processing center, processing times, requested vs. approved amount and other relevant business metrics. Summary Oracle recommends the use of Fusion Middleware as an extensions platform for applications. This approach makes modifications simple, quick to implement and easy to maintain/upgrade applications (by moving customizations away from applications to the process layer). It is also easier to manage processes that span multiple applications by using Oracle BPM. Additional Information Product Information on Oracle.com: Oracle Fusion Middleware Follow us on Twitter and Facebook Subscribe to our regular Fusion Middleware Newsletter

    Read the article

  • Unit Testing TSQL

    - by Grant Fritchey
    I went through a period of time where I spent a lot of effort figuring out how to set up unit tests for TSQL. It wasn't easy. There are a few tools out there that help, but mostly it involves lots of programming. Well, not as much as before. Thanks to the latest Down Tools Week at Red Gate, a new utility has been built and released into the wild: SQL Test. Like a lot of the new tools coming out of Red Gate these days, this one is directly integrated into SSMS, which means you're working where you're comfortable and where you already have lots of tools at your disposal. After the install, when you launch SSMS and get connected, you're prompted to install the tSQLt example database. Go for it. It's a quick way to see how the tool works, so I'd suggest using it. It gives you a quick leg up. The concepts are pretty straightforward. There are a series of CLR commands that you use to configure a test and the test assertions; in between, you're calling TSQL - either calls to your structure, queries, or stored procedures. They already have the one thing that I always found wanting in database tests: a way to compare tables of results. I also like the ability to create a dummy copy of tables for the tests. It lets you control structures and behaviors so that the tests are more focused. One of the issues I always ran into with the other testing tools is that setting up the tests might require potentially destructive changes to the structure of the database (dropping FKs, etc.), which added lots of time and effort to setting up the tests, making testing more difficult and, therefore, less useful. Functionally, this is pretty similar to the Visual Studio tests and TSQLUnit tests that I used to use. The primary improvement over the Visual Studio tests is that I'm working in SSMS instead of Visual Studio. The primary improvement over TSQLUnit is the SQL Test interface itself. A lot of the functionality is the same, but having a sweet little tool to manage & run the tests from makes a huge difference. Oh, and don't worry: you can still run these tests directly from TSQL too, so automation has not gone away. I'm still thinking about how I'd use this in a dev environment where I also had source control to fret about. That might be another blog post right there. I'm just getting started with SQL Test, so this is the first of several blog posts & videos. Watch this space. Try the tool.
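To make the concepts concrete, here is a hedged sketch of what a SQL Test / tSQLt test typically looks like: a test class, a faked (dummy) copy of the table under test, and a table comparison assertion. The table and procedure under test are invented for illustration; the tSQLt calls shown (NewTestClass, FakeTable, AssertEqualsTable, Run) are the framework's standard procedures.

    -- Create a test class (a schema that groups related tests).
    EXEC tSQLt.NewTestClass 'OrderTests';
    GO
    CREATE PROCEDURE OrderTests.[test totals are summed per customer]
    AS
    BEGIN
        -- Swap the real table for an empty, constraint-free copy - no FK dropping needed.
        EXEC tSQLt.FakeTable 'dbo.Orders';
        INSERT INTO dbo.Orders (CustomerID, Amount) VALUES (1, 10.00), (1, 15.00);

        -- Expected and actual result tables.
        CREATE TABLE #Expected (CustomerID INT, Total MONEY);
        INSERT INTO #Expected VALUES (1, 25.00);
        CREATE TABLE #Actual (CustomerID INT, Total MONEY);
        INSERT INTO #Actual EXEC dbo.GetCustomerTotals;   -- hypothetical proc under test

        -- The table-to-table comparison called out above.
        EXEC tSQLt.AssertEqualsTable '#Expected', '#Actual';
    END;
    GO
    -- Run from SSMS via SQL Test, or directly in TSQL for automation.
    EXEC tSQLt.Run 'OrderTests';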

    Read the article

  • Archbeat Link-O-Rama Top 10 Facebook Faves - June 16-22, 2013

    - by Bob Rhubart
    2,819 people now follow OTN ArchBeat on Facebook. These are the Top 10 most popular items shared there for the week of June 16-22, 2013. Getting started with Java EE 7: Hands-on in 10 minutes | Lucas Jellema Oracle ACE Director and prolific blogger Lucas Jellema offers his take on the Java EE7 release and shares tips and resources to help you on your way. Not ‘how’ but ‘why’ should you upgrade to JDeveloper & ADF 11.1.1.7.0 | Chris Muir Oracle ACE Director Tim Hall and Oracle ADF Product Manager Chris Muir collaborated on this dialog that just might help you in your decision. OTN Architect Day: Cloud Computing - July 9, Redwood Shores, CA You won't need 3D glasses to see the technical sessions at OTN Architect Day: Cloud Computing, July 9, 2013. Redwood Shores, CA. It's free! It's live! Register now! Video: Frédéric Desbiens: Bringing Java to On-Device iOS and Android Apps (QCon NYC 2013) Oracle Application Development Tools product manager Frédéric Desbiens recaps his QCon New York presentation about how Java developers can leverage existing skills to develop enterprise mobile applications. OEPE 12.1.1.2.2 with GlassFish Tools released | Peter Benedikovic Peter Benedikovic's brief post offers an overview of some of the features in the new version of Oracle Enterprise Pack for Eclipse, released in conjunction with the release of Java EE 7. Oracle Enterprise Manager 12c Configuration Best Practices (Part 2 of 3) | Bethany Lapaglia Part 2 of Beth Lapaglia's 3-part series on the most commonly implemented configuration changes to improve performance and operation of a large Enterprise Manager 12c environment focuses on recommended WebLogic Server changes. Video: Doug Clarke: Polyglot Persistence: From NoSQL to HTML5 (QCon NYC 2013) Doug Clarke, EclipseLink Project Lead and Oracle Director of Product Management gives a very condensed version of his QCon New York presentation on "Polyglot Persistence: From NoSQL to HTML5." Podcast Show Notes: DevOps, Cloud, and Role Creep - Part 2 Automation and innovation had a huge impact on the manufacturing jobs of years gone by. Is something similar happening to some IT jobs? Oracle ACE Directors Ron Batra, Basheer Khan, and Cary Millsap discuss what's happening in part 2 of this 3-part podcast. Video: Reza Rahman: Building Java HTML5/WebSocket Applications with JSR 356 (QCon NYC 2013) Java EE/GlassFish evangelist Reza Rahman talks about how WebSocket provides "the basis for a new generation of interactive and live Web applications" for mobile developers. Lessons from Fusion HCM Implementations | Tim Warner Oracle ACE Tim Warner shares summaries of the Fusion HCM implementation experiences of several companies, as detailed in presentations at the 2013 Oracle HCM Users Group Conference. Thought for the Day "If the mind really is the finest computer, then there are a lot of people out there who need to be rebooted." — Tim Bryce Source: softwarequotes.com

    Read the article

  • apache tomcat loadbalancing clustering on ubuntu

    - by user740010
    i am facing a problem in clustering the tomcat with apache as a loadbalancer using mod_jk on ubuntu. i have install apache2 on my ubuntu 11.04 and i have downloaded tomcat7 created two copies and kept them at two different location. 1st one is at /home/net4u/vishal/test/tomcatA 2nd one is at /home/net4u/vishal/test1/tomcatB i have made following changes to server.xml file in /conf folder 1. <Server port="8205" shutdown="SHUTDOWN"> 2. <Connector port="8280" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> 3.<Connector port="8209" protocol="AJP/1.3" redirectPort="8443" /> <Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcatB"> 4. <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> similarly i have modified other tomcat i.e tomcatA server.xml content of the server.xml is as follow: -- <!--The connectors can use a shared executor, you can define one or more named thread pools--> <!-- <Executor name="tomcatThreadPool" namePrefix="catalina-exec-" maxThreads="150" minSpareThreads="4"/> --> <!-- A "Connector" represents an endpoint by which requests are received and responses are returned. Documentation at : Java HTTP Connector: /docs/config/http.html (blocking & non-blocking) Java AJP Connector: /docs/config/ajp.html APR (HTTP/AJP) Connector: /docs/apr.html Define a non-SSL HTTP/1.1 Connector on port 8080 --> <Connector port="8280" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> <!-- A "Connector" using the shared thread pool--> <!-- <Connector executor="tomcatThreadPool" port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> --> <!-- Define a SSL HTTP/1.1 Connector on port 8443 This connector uses the JSSE configuration, when using APR, the connector should be using the OpenSSL style configuration described in the APR documentation --> <!-- <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" /> --> <!-- Define an AJP 1.3 Connector on port 8009 --> <Connector port="8109" protocol="AJP/1.3" redirectPort="8443" /> <!-- An Engine represents the entry point (within Catalina) that processes every request. The Engine implementation for Tomcat stand alone analyzes the HTTP headers included with the request, and passes them on to the appropriate Host (virtual host). Documentation at /docs/config/engine.html --> <!-- You should set jvmRoute to support load-balancing via AJP ie : <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1"> --> <Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcatB"> <!--For clustering, please take a look at documentation at: /docs/cluster-howto.html (simple how to) /docs/config/cluster.html (reference documentation) --> <!-- uncomment for clustering--> <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> <!-- Use the LockOutRealm to prevent attempts to guess user passwords via a brute-force attack --> <Realm className="org.apache.catalina.realm.LockOutRealm"> <!-- This Realm uses the UserDatabase configured in the global JNDI resources under the key "UserDatabase". Any edits that are performed against this UserDatabase are immediately available for use by the Realm. 
--> <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/> </Realm> <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true"> <!-- SingleSignOn valve, share authentication between web applications Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> --> <!-- Access log processes all example. Documentation at: /docs/config/valve.html Note: The pattern used is equivalent to using pattern="common" --> <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="localhost_access_log." suffix=".txt" pattern="%h %l %u %t &quot;%r&quot; %s %b" resolveHosts="false"/> </Host> </Engine> i have install libapache2-mod-jk step 1. i have Created jk.load file in /etc/apache2/mods-enabled/jk.load content is as follows: LoadModule jk_module /usr/lib/apache2/modules/mod_jk.so Create /etc/apache2/mods-enabled/jk.conf: JkWorkersFile /etc/apache2/workers.properties JkLogFile /var/log/apache2/jk.log JkMount /ecommerce/* worker1 JkMount /images/* worker1 JkMount /content/* worker1 step 2. Created workers.properties file in /etc/apache2/workers.properties content is as follows: workers.tomcat_home=/home/vishal/Desktop/test/tomcatA workers.java_home=/usr/lib/jvm/default-java ps=/ worker.list=tomcatA,tomcatB,loadbalancer   worker.tomcatA.port=8109 worker.tomcatA.host=localhost worker.tomcatA.type=ajp13 worker.tomcatA.lbfactor=1   worker.tomcatB.port=8209 worker.tomcatB.host=localhost worker.tomcatB.type=ajp13 worker.tomcatB.lbfactor=1 worker.loadbalancer.type=lb worker.loadbalancer.balanced_workers=tomcatA,tomcatB worker.loadbalancer.sticky_session=1 i tried the same thing on the windows machine it is working.
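One mismatch in the files above stands out (a guess, not a verified fix): jk.conf mounts /ecommerce/*, /images/* and /content/* to a worker named worker1, but workers.properties only defines tomcatA, tomcatB and loadbalancer, so Apache has no worker to route those URLs to. Pointing the mounts at the balancer worker, and double-checking that each instance's AJP connector port matches workers.properties (the pasted server.xml pairs jvmRoute "tomcatB" with AJP port 8109, while workers.properties expects tomcatB on 8209), would look roughly like this:

    # /etc/apache2/mods-enabled/jk.conf (sketch)
    JkWorkersFile /etc/apache2/workers.properties
    JkLogFile     /var/log/apache2/jk.log

    # Route the application paths to the load-balancer worker defined in workers.properties,
    # not to an undefined "worker1".
    JkMount /ecommerce/* loadbalancer
    JkMount /images/*    loadbalancer
    JkMount /content/*   loadbalancer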

    Read the article

  • Cannot log in to the desktop on ubuntu 11.10?

    - by Jichao
    The problem is, I could log in under the terminal, i could ifup eth0, i could do anything I want in the terminal, but if I use ctrl+alt+f7 goto the gnome login screen, after I input the correct password, the system just send me back to same login screen again. I have created a new user, but it didn't work. I have change all the files under ~/ to jichao:jichao(which is my username) with chown -hR jichao:jichao /home/jichao, but it didn't work too. I searched the internet, somebody said I should see the logs under /var/log/gdm, but there is not a /var/log/gdm directory in my box. Here are the tail of files under /var/log/ tail X.org.log [ 3263.348] (II) Loading /usr/lib/xorg/modules/input/evdev_drv.so [ 3263.348] (**) Dell Dell USB Keyboard: always reports core events [ 3263.348] (**) Dell Dell USB Keyboard: Device: "/dev/input/event5" [ 3263.348] (--) Dell Dell USB Keyboard: Found keys [ 3263.348] (II) Dell Dell USB Keyboard: Configuring as keyboard [ 3263.348] (**) Option "config_info" "udev:/sys/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.4/2-1.4:1.0/input/input29/event5" [ 3263.348] (II) XINPUT: Adding extended input device "Dell Dell USB Keyboard" (type: KEYBOARD) [ 3263.348] (**) Option "xkb_rules" "evdev" [ 3263.348] (**) Option "xkb_model" "pc105" [ 3263.348] (**) Option "xkb_layout" "us" kern.log Mar 20 09:32:58 jichao-MS-730 kernel: [ 3182.701247] input: Dell Dell USB Keyboard as /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.4/2-1.4:1.0/input/input27 Mar 20 09:32:58 jichao-MS-730 kernel: [ 3182.701392] generic-usb 0003:413C:2003.0018: input,hidraw1: USB HID v1.10 Keyboard [Dell Dell USB Keyboard] on usb-0000:00:1d.0-1.4/input0 Mar 20 09:33:02 jichao-MS-730 kernel: [ 3186.642572] usb 2-1.3: new low speed USB device number 17 using ehci_hcd Mar 20 09:33:02 jichao-MS-730 kernel: [ 3186.741892] input: Microsoft Microsoft 5-Button Mouse with IntelliEye(TM) as /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.3/2-1.3:1.0/input/input28 Mar 20 09:33:02 jichao-MS-730 kernel: [ 3186.742080] generic-usb 0003:045E:0047.0019: input,hidraw2: USB HID v1.10 Mouse [Microsoft Microsoft 5-Button Mouse with IntelliEye(TM)] on usb-0000:00:1d.0-1.3/input0 Mar 20 09:33:27 jichao-MS-730 kernel: [ 3212.473901] usb 2-1.3: USB disconnect, device number 17 Mar 20 09:33:28 jichao-MS-730 kernel: [ 3212.702031] usb 2-1.4: USB disconnect, device number 16 Mar 20 09:34:08 jichao-MS-730 kernel: [ 3253.022655] usb 2-1.4: new low speed USB device number 18 using ehci_hcd Mar 20 09:34:08 jichao-MS-730 kernel: [ 3253.124278] input: Dell Dell USB Keyboard as /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.4/2-1.4:1.0/input/input29 Mar 20 09:34:08 jichao-MS-730 kernel: [ 3253.124423] generic-usb 0003:413C:2003.001A: input,hidraw1: USB HID v1.10 Keyboard [Dell Dell USB Keyboard] on usb-0000:00:1d.0-1.4/input0 Mar 20 09:33:02 jichao-MS-730 kernel: [ 3186.741892] input: Microsoft Microsoft 5-Button Mouse with IntelliEye(TM) as /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.3/2-1.3:1.0/input/input28 Mar 20 09:33:02 jichao-MS-730 kernel: [ 3186.742080] generic-usb 0003:045E:0047.0019: input,hidraw2: USB HID v1.10 Mouse [Microsoft Microsoft 5-Button Mouse with IntelliEye(TM)] on usb-0000:00:1d.0-1.3/input0 syslog Mar 20 09:33:02 jichao-MS-730 mtp-probe: bus: 2, device: 17 was not an MTP device Mar 20 09:33:27 jichao-MS-730 kernel: [ 3212.473901] usb 2-1.3: USB disconnect, device number 17 Mar 20 09:33:28 jichao-MS-730 kernel: [ 3212.702031] usb 2-1.4: USB disconnect, device number 16 Mar 20 09:34:08 jichao-MS-730 kernel: [ 
3253.022655] usb 2-1.4: new low speed USB device number 18 using ehci_hcd Mar 20 09:34:08 jichao-MS-730 mtp-probe: checking bus 2, device 18: "/sys/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.4" Mar 20 09:34:08 jichao-MS-730 mtp-probe: bus: 2, device: 18 was not an MTP device Mar 20 09:34:08 jichao-MS-730 kernel: [ 3253.124278] input: Dell Dell USB Keyboard as /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.4/2-1.4:1.0/input/input29 Mar 20 09:34:08 jichao-MS-730 kernel: [ 3253.124423] generic-usb 0003:413C:2003.001A: input,hidraw1: USB HID v1.10 Keyboard [Dell Dell USB Keyboard] on usb-0000:00:1d.0-1.4/input0 auth.log Mar 20 09:18:52 jichao-MS-730 lightdm: pam_ck_connector(lightdm-autologin:session): nox11 mode, ignoring PAM_TTY :0 Mar 20 09:18:53 jichao-MS-730 lightdm: pam_succeed_if(lightdm:auth): requirement "user ingroup nopasswdlogin" not met by user "jichao" Mar 20 09:18:53 jichao-MS-730 dbus[835]: [system] Rejected send message, 2 matched rules; type="method_call", sender=":1.240" (uid=104 pid=6457 comm="/usr/lib/indicator-datetime/indicator-datetime-ser") interface="org.freedesktop.DBus.Properties" member="GetAll" error name="(unset)" requested_reply="0" destination=":1.11" (uid=0 pid=1156 comm="/usr/sbin/console-kit-daemon --no-daemon ") Mar 20 09:19:38 jichao-MS-730 sudo: jichao : TTY=tty6 ; PWD=/home ; USER=root ; COMMAND=/bin/chown -hR jichao:jichao jicha Mar 20 09:19:39 jichao-MS-730 sudo: jichao : TTY=tty6 ; PWD=/home ; USER=root ; COMMAND=/bin/chown -hR jichao:jichao jichao Mar 20 09:20:10 jichao-MS-730 lightdm: pam_unix(lightdm-autologin:session): session closed for user lightdm Mar 20 09:20:11 jichao-MS-730 lightdm: pam_unix(lightdm-autologin:session): session opened for user lightdm by (uid=0) Mar 20 09:20:11 jichao-MS-730 lightdm: pam_ck_connector(lightdm-autologin:session): nox11 mode, ignoring PAM_TTY :0 Mar 20 09:20:12 jichao-MS-730 lightdm: pam_succeed_if(lightdm:auth): requirement "user ingroup nopasswdlogin" not met by user "jichao" Mar 20 09:20:12 jichao-MS-730 dbus[835]: [system] Rejected send message, 2 matched rules; type="method_call", sender=":1.247" (uid=104 pid=6572 comm="/usr/lib/indicator-datetime/indicator-datetime-ser") interface="org.freedesktop.DBus.Properties" member="GetAll" error name="(unset)" requested_reply="0" destination=":1.11" (uid=0 pid=1156 comm="/usr/sbin/console-kit-daemon --no-daemon ") It seems that my .xsession-errors does not grow since yesterday. Here is my .xsession-error: (gnome-settings-daemon:1550): Gdk-WARNING **: The program 'gnome-settings-daemon' received an X Window System error. This probably reflects a bug in the program. The error was 'BadWindow (invalid Window parameter)'. (Details: serial 26702 error_code 3 request_code 2 minor_code 0) (Note to programmers: normally, X errors are reported asynchronously; that is, you will receive the error a while after causing it. To debug your program, run it with the --sync command line option to change this behavior. You can then get a meaningful backtrace from your debugger if you break on the gdk_x_error() function.) 
(nautilus:3106): GLib-GObject-CRITICAL **: g_value_get_object: assertion `G_VALUE_HOLDS_OBJECT (value)' failed (nautilus:3106): GLib-GObject-CRITICAL **: g_value_get_object: assertion `G_VALUE_HOLDS_OBJECT (value)' failed (nautilus:3106): GLib-GObject-CRITICAL **: g_value_get_object: assertion `G_VALUE_HOLDS_OBJECT (value)' failed (nautilus:3106): GLib-GObject-CRITICAL **: g_value_get_object: assertion `G_VALUE_HOLDS_OBJECT (value)' failed (nautilus:3106): GLib-GObject-CRITICAL **: g_value_get_object: assertion `G_VALUE_HOLDS_OBJECT (value)' failed (nautilus:3106): GLib-GObject-CRITICAL **: g_value_get_object: assertion `G_VALUE_HOLDS_OBJECT (value)' failed (nautilus:3106): GLib-GObject-CRITICAL **: g_value_get_object: assertion `G_VALUE_HOLDS_OBJECT (value)' failed (nautilus:3106): GLib-GObject-CRITICAL **: g_value_get_object: assertion `G_VALUE_HOLDS_OBJECT (value)' failed (nautilus:3106): GLib-GObject-CRITICAL **: g_value_get_object: assertion `G_VALUE_HOLDS_OBJECT (value)' failed (nautilus:3106): GLib-GObject-CRITICAL **: g_value_get_object: assertion `G_VALUE_HOLDS_OBJECT (value)' failed (nautilus:3106): GLib-GObject-CRITICAL **: g_value_get_object: assertion `G_VALUE_HOLDS_OBJECT (value)' failed (nautilus:3106): GLib-GObject-CRITICAL **: g_value_get_object: assertion `G_VALUE_HOLDS_OBJECT (value)' failed (nautilus:3106): GLib-GObject-CRITICAL **: g_value_get_object: assertion `G_VALUE_HOLDS_OBJECT (value)' failed (nautilus:3106): GLib-GObject-CRITICAL **: g_value_get_object: assertion `G_VALUE_HOLDS_OBJECT (value)' failed (nautilus:3106): GLib-GObject-CRITICAL **: g_value_get_object: assertion `G_VALUE_HOLDS_OBJECT (value)' failed (nautilus:3106): GLib-GObject-CRITICAL **: g_value_get_object: assertion `G_VALUE_HOLDS_OBJECT (value)' failed (nautilus:3106): GLib-GObject-CRITICAL **: g_value_get_object: assertion `G_VALUE_HOLDS_OBJECT (value)' failed (nautilus:3106): GLib-GObject-CRITICAL **: g_value_get_object: assertion `G_VALUE_HOLDS_OBJECT (value)' failed (nautilus:3106): GLib-GObject-CRITICAL **: g_value_get_object: assertion `G_VALUE_HOLDS_OBJECT (value)' failed (nautilus:3106): GLib-GObject-CRITICAL **: g_value_get_object: assertion `G_VALUE_HOLDS_OBJECT (value)' failed WARN 2012-03-17 19:28:46 glib <unknown>:0 Unable to fetch children: Method "Children" with signature "" on interface "org.ayatana.bamf.view" doesn't exist WARN 2012-03-17 19:28:46 glib <unknown>:0 Unable to fetch children: Method "Children" with signature "" on interface "org.ayatana.bamf.view" doesn't exist (yunio:2430): Gtk-WARNING **: ??????????????:“pixmap”, (yunio:2430): Gtk-WARNING **: ??????????????:“pixmap”, (polkit-gnome-authentication-agent-1:1601): Gtk-WARNING **: ??????????????:“pixmap”, (yunio:2430): Gtk-WARNING **: ??????????????:“pixmap”, (yunio:2430): Gtk-WARNING **: ??????????????:“pixmap”, (polkit-gnome-authentication-agent-1:1601): Gtk-WARNING **: ??????????????:“pixmap”, (polkit-gnome-authentication-agent-1:1601): Gtk-WARNING **: ??????????????:“pixmap”, (polkit-gnome-authentication-agent-1:1601): Gtk-WARNING **: ??????????????:“pixmap”, /usr/share/system-config-printer/applet.py:336: GtkWarning: ??????????????:“pixmap”, self.loop.run () (unity-window-decorator:1652): Gtk-WARNING **: ??????????????:“pixmap”, (unity-window-decorator:1652): Gtk-WARNING **: ??????????????:“pixmap”, (unity-window-decorator:1652): Gtk-WARNING **: ??????????????:“pixmap”, (unity-window-decorator:1652): Gtk-WARNING **: ??????????????:“pixmap”, common-plugin-Message: checking whether we have a device for 4: yes 
common-plugin-Message: checking whether we have a device for 5: yes common-plugin-Message: checking whether we have a device for 6: yes common-plugin-Message: checking whether we have a device for 7: yes common-plugin-Message: checking whether we have a device for 10: yes common-plugin-Message: checking whether we have a device for 8: yes common-plugin-Message: checking whether we have a device for 9: yes (gnome-settings-daemon:13791): GLib-GObject-CRITICAL **: g_object_unref: assertion `G_IS_OBJECT (object)' failed [1331983727,000,xklavier.c:xkl_engine_start_listen/] The backend does not require manual layout management - but it is provided by the application ** (gnome-fallback-mount-helper:1584): DEBUG: ConsoleKit session is active 0 (gnome-fallback-mount-helper:1584): Gdk-WARNING **: gnome-fallback-mount-helper: Fatal IO error 11 (???????) on X server :0. (gdu-notification-daemon:1708): Gdk-WARNING **: gdu-notification-daemon: Fatal IO error 11 (???????) on X server :0. unity-window-decorator: Fatal IO error 11 (???????) on X server :0.0. (bluetooth-applet:1583): Gdk-WARNING **: bluetooth-applet: Fatal IO error 11 (???????) on X server :0. (nm-applet:1596): Gdk-WARNING **: nm-applet: Fatal IO error 11 (???????) on X server :0. (nautilus:3106): IBUS-WARNING **: _connection_closed_cb: Underlying GIOStream returned 0 bytes on an async read (update-notifier:1821): Gdk-WARNING **: update-notifier: Fatal IO error 11 (???????) on X server :0. applet.py: Fatal IO error 11 (???????) on X server :0. (nautilus:3106): Gdk-WARNING **: nautilus: Fatal IO error 11 (???????) on X server :0. Could you help me, Thanks.

    Read the article

  • Is there a better term than "smoothness" or "granularity" to describe this language feature?

    - by Chris
    One of the best things about programming is the abundance of different languages. There are general purpose languages like C++ and Java, as well as little languages like XSLT and AWK. When comparing languages, people often use things like speed, power, expressiveness, and portability as the important distinguishing features. There is one characteristic of languages I consider to be important that, so far, I haven't heard [or been able to come up with] a good term for: how well a language scales from writing tiny programs to writing huge programs. Some languages make it easy and painless to write programs that only require a few lines of code, e.g. task automation. But those languages often don't have enough power to solve large problems, e.g. GUI programming. Conversely, languages that are powerful enough for big problems often require far too much overhead for small problems. This characteristic is important because problems that look small at first frequently grow in scope in unexpected ways. If a programmer chooses a language appropriate only for small tasks, scope changes can require rewriting code from scratch in a new language. And if the programmer chooses a language with lots of overhead and friction to solve a problem that stays small, it will be harder for other people to use and understand than necessary. Rewriting code that works fine is the single most wasteful thing a programmer can do with their time, but using a bazooka to kill a mosquito instead of a flyswatter isn't good either. Here are some of the ways this characteristic presents itself.
- Can be used interactively - there is some environment where programmers can enter commands one by one
- Requires no more than one file - neither project files nor makefiles are required for running in batch mode
- Can easily split code across multiple files - files can reference each other, or there is some support for modules
- Has good support for data structures - supports structures like arrays, lists, and especially classes
- Supports a wide variety of features - features like networking, serialization, XML, and database connectivity are supported by standard libraries
Here's my take on how C#, Python, and shell scripting measure up. Python scores highest.
Feature          C#      Python  shell scripting
---------------  ------  ------  ---------------
Interactive      poor    strong  strong
One file         poor    strong  strong
Multiple files   strong  strong  moderate
Data structures  strong  strong  poor
Features         strong  strong  strong
Is there a term that captures this idea? If not, what term should I use? Here are some candidates.
- Scalability - already used to describe language performance, so it's not a good idea to overload it in the context of language syntax
- Granularity - expresses the idea of being good just for big tasks versus being good for big and small tasks, but doesn't express anything about data structures
- Smoothness - expresses the idea of low friction, but doesn't express anything about strength of data structures or features
Note: Some of these properties are more correctly described as belonging to a compiler or IDE than the language itself. Please consider these tools collectively as the language environment. My question is about how easy or difficult languages are to use, which depends on the environment as well as the language.

    Read the article

  • Spotlight on RIVA: CRM integration for Oracle CRM on Demand and Microsoft Exchange

    - by Richard Lefebvre
    Introducing Riva from Omni - an Oracle ISV partner specializing in Enterprise Management and Integration Solutions Riva delivers advanced, server-side integration for Oracle CRM On Demand and Microsoft Exchange or even Novell GroupWise. Riva allows Oracle customers to go beyond the standard Outlook plug-in to deliver additional value for the end user as they interact between Outlook and CRM On Demand. Riva syncs CRM On Demand to ALL Exchange mail apps, not just Windows Outlook.  So, whether customers are using Outlook 2010, Outlook Web Access (web client), Outlook 2011 for Mac, Apple Mail, Outlook on Citrix  or a mobile device, Riva's got them covered. There are no plug-ins to be installed, configured, managed and maintained on users' desktops, laptops as Riva delivers Server-side synchronisation for CRMOD and Exchange. The automation of CRM and Outlook integration will remove the reliance upon users to synchronise between the two with Riva handling this process. Riva allows administrators to define sync policies and apply them to individuals or groups of users depending on their sync requirements. Administrators will be able to determine and manage the exposure of the most pertinent detail to be synchronised between Outlook and CRM On Demand. Custom and organic contact filtering for large deployments i.e. Based on ownership, groupings and contact frequency, filters can be applied on what contact records are shared with the users. Riva provides the capability to synchronise CRM and Outlook beyond Contacts, Calendar entries and Email. The synchronisation can be extended to cater for  opportunities, quotes and custom objects for example within the Outlook interface. Riva SmartConvert Folders can automate the creation of opportunities and associated contacts for example if they don't already exist. This can facilitate a reduction in manual detail entry through quick association whilst also benefiting user adoption. From a mobile perspective, Riva allows users to view and manage their CRM On Demand contacts, calendar, tasks, opportunities and cases from iPad, iPhone, Android and BlackBerry devices.  Again, there are no mobile apps or additional plugins to install, configure or manage. We sync CRM On Demand to Exchange.  Because the mobile device is connected to an Exchange mailbox, the information automatically syncs down to the native address book, calendar and mail apps on the smartphone or tablet. Riva Datasheet for CRM On Demand Riva Brochure – Oracle CRM On Demand  Technical Knowledgebase & Riva Trial  http://kb.omni-ts.com/47/ Comparison to Outlook Plug-ins Riva Diagram – Riva Comparison with Outlook Plug-ins Contact: Wolfgang Berger - [email protected]

    Read the article

  • ArchBeat Link-o-Rama for 2012-09-21

    - by Bob Rhubart
    The Real Architects of Los Angeles: OTN Architect Day in LA - Oct 25 No gossip. No drama. No hair pulling. Just a full day of technical sessions and peer interaction focused on using Oracle technologies in today's cloud and SOA architectures. The event is free, but seating is limited, so register now. Thursday, October 25, 2012. 8:00 a.m. – 5:00 p.m. Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048. Why IT is a profession in 'flux' | ZDNet I usually don't post two items from the same person in one day, but this post from ZDNet blogger Joe McKendrick deals with some critical issues affecting those in IT. As McKendrick puts it: "IT professionals are under considerable pressure to deliver more value to the business, versus being good at coding and testing and deploying and integrating." Cloud, automation drive new growth in SOA governance market | ZDNet "SOA governance tools and processes learned over the past decade are now underpinning cloud projects as they scale across enterprises," reports Joe McKendrick. But there remains a lack of understanding about SOA Governance. For a broader discussion of the importance of IT governance, check out the latest OTN ArchBeat Podcast: By Any Other Name: Governance and Architecture How I Simplified the Installation of Oracle Database on Oracle Linux 6 Michele Casey's update of Ginny Henningsen's original article. This version shows how to simplify the installation of Oracle Database 11g on Oracle Linux 6 by installing the oracle-rdbms-server-11gR2-preinstall RPM package. Fault Handling Slides and Q&A | Ronald van Luttikhuizen Oracle ACE Director Ronald van Luttikhuizen shares the slides and a Q&A transcript from a presentation he and fellow ACE Director Guido Schmutz gave at the recent Oracle OpenWorld and JavaOne preview event organized by AMIS Technology. BPM ADF Task forms. Checking whether the current user is in a BPM Swimlane | Christopher Karl Chan "The tricky part here is that the ADF Task Details Form is in fact part of a separate J2EE application to the main workspace," says Oracle Fusion Middleware A-Team member Christopher Karl Chan. "So if you try to use Java or Expression Language to get the logged in user you will only find anonymous and none of the BPM Roles you will be expecting. So what to do?" Don't guess—read the post and find out. Thought for the Day "The speed of change makes you wonder what will become of architecture." — Tadao Ando Source: BrainyQuote

    Read the article

  • EPM Planning (Hyperion) V11.1.2 Implementation Hands-On Boot-camp

    - by Mike.Hallett(at)Oracle-BI&EPM
    5-Day Training for Partners: 29th October - 2nd November 2012, London (UK): REGISTER Here. This FREE-for-Partners 5-day workshop is designed to provide implementation instruction on Oracle Hyperion EPM Planning. This boot-camp is intended for prospective implementers of the Planning and Budgeting functionality of Oracle EPM, or implementers who are already familiar with the basics of EPM Planning and looking to strengthen their base of knowledge in the product. The class begins with an overview of Essbase, the foundation of Hyperion Planning. It provides a general overview of Planning and Planning terms, the architecture of all the Planning components, and how they are commonly used. The course goes over all the steps to create an application from scratch. This involves some preparation work outside of Planning and leads to developing the application in both the Planning Windows and Web clients. Participants will modify existing dimensions and build out the hierarchies using the Web client. Topics Covered The boot-camp shows developers how to build out dimensions using Classic Planning and by using EPMA. It covers the mechanics and strategies for automating the build process, such as interface tables. It reviews data loads using Load Rules to load the Planning database. The course focuses on tasks that end users must perform during the planning cycle. It walks students through creating and modifying forms, working with forms to enter data, adding annotations, and the rest of the form features such as running business rules and managing task lists. It covers how to use the forms in the Smart View client and finishes up the end-user perspective by going through Workflow Management and the process of submitting a plan for review. The final section of the course covers Security and other administration topics such as automation and deployment. Prerequisites Ideal participants are Oracle partners (SIs and resellers) with a background in business information systems and a clientele of customers with ongoing or prospective EPM initiatives. Alternatively, partners with the background described above and an interest in evolving their practice to a similar profile are suitable participants. Further online OPN guided learning path information and webinars are available at: Oracle Hyperion Planning 11 Essentials. Please note that attendees are required to bring a laptop. View the laptop requirements and detailed agenda here. REGISTER Here: acceptance is subject to availability and your place will be confirmed within two weeks (for help, see the Partner Registration Guide). Training Location: Oracle Corporation UK Ltd, Columbus Room, Customer Visit Center, 1 South Place, London EC2M 2RB. Training Dates: 29th October - 2nd November, 9:30 am – 5:00 pm BST. For more information please contact [email protected].

    Read the article

  • Puppet: Getting Started On Windows

    - by Robz / Fervent Coder
    Originally posted on: http://geekswithblogs.net/robz/archive/2014/08/07/puppet-getting-started-on-windows.aspx Now that we've talked a little about Puppet, let's see how easy it is to get started. Install Puppet Let's get Puppet installed. There are two ways to do that: With Chocolatey: Open an administrative/elevated command shell and type: choco install puppet Download and install Puppet manually - http://puppetlabs.com/misc/download-options Run Puppet Let's make pasting into a console window work with Control + V (like it should): choco install wincommandpaste If you have a cmd.exe command shell open (and Chocolatey installed), type: RefreshEnv The previous command will refresh your environment variables, a la Chocolatey v0.9.8.24+. If you were running PowerShell, there isn't yet a refreshenv for you (one is coming though!). If you have to restart your CLI (command line interface) session, or you installed Puppet manually, open an administrative/elevated command shell and type: puppet resource user Output should look similar to a few of these: user { 'Administrator': ensure => 'present', comment => 'Built-in account for administering the computer/domain', groups => ['Administrators'], uid => 'S-1-5-21-some-numbers-yo-500', } Let's create a user: puppet apply -e "user {'bobbytables_123': ensure => present, groups => ['Users'], }" Relevant output should look like: Notice: /Stage[main]/Main/User[bobbytables_123]/ensure: created Run the 'puppet resource user' command again. Note the user we created is there! Let's clean up after ourselves and remove that user we just created: puppet apply -e "user {'bobbytables_123': ensure => absent, }" Relevant output should look like: Notice: /Stage[main]/Main/User[bobbytables_123]/ensure: removed Run the 'puppet resource user' command one last time. Note we just removed a user! Conclusion You just did some configuration management / system administration. Welcome to the new world of awesome! Puppet is super easy to get started with. This is a taste so you can start seeing the power of automation and where you can go with it. We haven't talked about resources, manifests (scripts), best practices and all of that yet. Next we are going to start to get into more extensive things with Puppet. Next time we'll walk through getting a Vagrant environment up and running. That way we can do some crazier stuff and when we are done, we can just clean it up quickly.
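    As a small next step from the one-liners above, the same user resource can also be saved in a manifest file and applied from disk; this is just a sketch, and the file name is an illustration rather than anything from the walkthrough.

    # bobbytables.pp - the same resource as the 'puppet apply -e' example, saved as a manifest
    user { 'bobbytables_123':
      ensure => present,
      groups => ['Users'],
    }

    Then, from the same elevated shell: puppet apply bobbytables.pp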

    Read the article

  • Visual Studio 2010 Launch Events

    - by Jim Duffy
    Don’t miss out on the opportunity to learn about the new features in Visual Studio 2010. Check out the MSDN Events page and find out when the talented folks of the Developer & Evangelism group will be visiting your city to prove to you that /*Life Runs On Code*/. I’ll be attending the Raleigh event June 2, 2010 from 1:00 - 5:00 PM. North Carolina State University, Jane S. McKimmon Conference Center 1101 Gorman St Raleigh North Carolina 27606 United States From the Raleigh Event page: Event Overview Learn about the rich application platforms that Microsoft® Visual Studio® 2010 supports, including Windows® 7, the Web, SharePoint®, Windows Azure™, SQL®, and Windows® Phone 7 Series. From tighter tester and dev collaboration to new ALM tools, there’s a lot that’s new. Here’s what you can expect: Windows Development with Visual Studio 2010 Visual Studio has always been the best way to build compelling visual solutions for Windows. Visual Studio 2010 continues this trend with great new tooling support for Silverlight 4, WPF, and native development. In this demo heavy session, you’ll see how you can build rich Windows applications with Silverlight 4 using new trusted application features including out-of-browser execution, saving to the file system, and even COM Automation. You’ll also see how you can use the new Task Parallel Library from within a WPF application to take advantage of all those cores in today’s modern computers. Web and Cloud Development with Visual Studio 2010 If you build solutions for the web, then this session is for you. Come see how your existing skills move forward with Visual Studio 2010 both for in-house ASP.NET development and the new frontier of the Cloud. In this session, you’ll see improved designers, new HTML and JavaScript snippets, Web Forms enhancements, and how you can quickly build great web sites using Dynamic Data. You’ll see the changes made to testable web sites with MVC 2.0 and how we’ve integrated JQuery support into the platform. You’ll then see how easy it is to leverage your existing code and move to the cloud with Windows Azure. Windows Phone 7 Developer Tools and Platform Overview This session provides an overview of Visual Studio® 2010 for Windows Phone. Learn about the powerful capabilities of this new application platform and the developer tools experience including basic IDE usage, debugging, packaging, and deployment. This session also shows how you can use Microsoft Expression® Blend™ for Windows Phone to build great Silverlight applications. Have a day. :-|

    Read the article

  • Automate RAC Cluster Upgrades using EM12c

    - by HariSrinivasan
    One of the most arduous processes in DB maintenance is upgrading databases across major versions, especially for complex RAC Clusters. With the release of Database Plug-in (12.1.0.5.0), EM12c Rel 3 (12.1.0.3.0) now supports automated upgrading of RAC Clusters in addition to Standalone Databases. This automation includes: Upgrade of the complete Cluster across the nodes (Example: 11.1.0.7 CRS, ASM, RAC DB -> 11.2.0.4 or 12.1.0.1 GI, RAC DB). Best practices in tune with your operations, where you can automate the upgrade in steps: Step 1: Upgrade the Clusterware to Grid Infrastructure (allowing you to wait, test and then move to DBs). Step 2: Upgrade RAC DBs either separately or in a group (mass upgrade of RAC DBs in the cluster). Standard pre-requisite checks like Cluster Verification Utility (CVU) and RAC checks. Division of the upgrade process into non-downtime activities (like laying down the new Oracle Homes (OH), running checks) and downtime activities (like upgrading Clusterware to GI, upgrading RAC), thereby lowering the downtime required. Ability to configure Backup and Restore options as a part of this upgrade process. You can choose to: a. Take a backup via this process (either Guaranteed Restore Point (GRP) or RMAN) b. Set the procedure to pause just before the upgrade step to allow you to take a custom backup c. Ignore backup completely, if there are external mechanisms already in place. High Level Steps: Select the procedure "Upgrade Database" from the Database Provisioning Home page. Choose the Target Type for upgrade and the Destination version. Pick and choose the Cluster; it picks up the complete topology since the clusterware/GI isn't upgraded already. Select the Gold Image of the destination version for deploying both the GI and RAC OHs. Specify the new OH patch and credentials, choose the Restore and Backup options, and if required provide additional pre and post scripts. Set the break points in the procedure execution to isolate downtime activities. Submit and track the procedure's execution status. The animation below captures the steps in the wizard. For the step-by-step process and to understand the support matrix, check this documentation link. Explore the functionality!! In the next blog, I will talk about automating rolling upgrades of databases in a Physical Standby Data Guard environment using Transient Logical Standby.

    Read the article

  • WebCenter Customer Spotlight: Hitachi Data Systems

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter Watch this Webcast to see a live demo of how HDS creates multilingual content for their 35+ regional websites. Solution Summary: Hitachi Data Systems (HDS) provides mid-range and high-end storage systems, software and services. It is a wholly owned subsidiary of Hitachi Ltd. HDS is based in Santa Clara, California, and has over 5,300 employees in more than 100 countries and regions. HDS's main objectives were to provide a consistent message across all their sites, to maintain a tight governance structure across their messages and related content, expand the use of the existing content management systems and implement a centralized translation management system. HDS implemented a global web content management system based on Oracle WebCenter Content and integrated the Lingotek translation management system to manage their multilingual content. The implemented solution provides each Geo with the ability to expand their web offering to meet local market needs, while staying aligned with the Corporate Web Guidelines. Company Overview: Hitachi Data Systems (HDS) provides mid-range and high-end storage systems, software and services. It is a wholly owned subsidiary of Hitachi Ltd. and part of the Hitachi Information Systems & Telecommunications Division. The company sells through direct and indirect channels in more than 170 countries and regions. Its customers include 50 percent of the Fortune 100 companies. HDS is based in Santa Clara, California, and has over 5,300 employees in more than 100 countries and regions. Business Challenges: HDS has over 35 global websites, and the lack of global web capabilities led to inconsistent messaging, slower time to market and a failure to address local language needs. There was extensive operational overhead due to manual and redundant processes. Translation efforts were superficial, inconsistent and wasteful, and the lack of translation automation tools discouraged localization. HDS's main objectives were to provide a consistent message across all their sites, to maintain a tight governance structure across their messages and related content, expand the use of the existing content management systems and implement a centralized translation management system. Solution Deployed: HDS implemented a global web content management system based on Oracle WebCenter Content. The solution supports decentralized publishing for their 35+ global sites to address local market needs while ensuring editorial and brand review through embedded review processes. They integrated the Lingotek translation management system into Oracle WebCenter Content to manage their multilingual content. Business Results: Provides each Geo with the ability to expand their web offering to meet local market needs, while staying aligned with the Corporate Web Guidelines. Enables end-to-end content lifecycle management across multiple languages. Leverages translation memory for reuse and consistency. Reduces time to market with a central repository of translated content. Additional Information: HDS Webcast | Oracle WebCenter Content | Lingotek website

    Read the article

  • Do you know about the Visual Studio 2010 Database Projects Guidance?

    - by Martin Hinshelwood
    Early on in the Team System (now Visual Studio ALM) cycle a new product surfaced within Team System that was affectionately called "Data Dude", but had the more formal name of "Visual Studio 2005 Team Edition for Database Professionals". The purpose of this product was to try and make the database a "first class citizen" in the development world. Those that started using Visual Studio 2005 Team Edition for Database Professionals (Data Dude) loved it, but everyone else did not get it. The capabilities were a little patchy, but the one thing it did bring to the party was the ability to put your database schema under source control. This was revolutionary: previously your DBA sat as far away from the team as possible, usually in a dark cupboard; now they could partake of all the goodness of Version Control, Work Item Tracking and automated builds. The problem was that the understanding required to manage these projects was very different to that needed previously. Then the Visual Studio ALM Rangers got a hold of it…and produced some of the best guidance available. Figure: Download the guidance from http://vsdatabaseguide.codeplex.com/ This guidance discusses scenarios and approaches for using the Database Projects in Visual Studio 2010 to help you use the tools more effectively and maximize their value to your organization. This guidance is focused on these five areas: Solution and Project Management; Source Code Control and Configuration Management; Integrating External Changes with the Project System; Build and Deployment Automation with Visual Studio Database Projects; Database Testing and Deployment Verification. Each of these areas has common guidance, usage scenarios, hands-on labs, and lessons learned from real-world engagements and the community discussions. The guidance is broken down into three packages: guidance documentation, hands-on-lab (HOL) documentation, and a HOL package (note: the documentation is available in XPS-only format packages or complete XPS, PDF, DOCX format packages). If you need assistance and no one else can help, then you may need to call the Visual Studio ALM Rangers. The Visual Studio ALM Rangers have the mission to provide out-of-band solutions for missing features or guidance. They are supported by the Microsoft Product Group, Microsoft Consulting Services, Microsoft Most Valuable Professionals (MVPs) and technical specialists from technology communities around the globe, giving you a real-world view from the field, where the technology has been tested and used. For more information on the Rangers please visit http://msdn.microsoft.com/en-us/vstudio/ee358786.aspx and for a list of other Rangers projects please see http://msdn.microsoft.com/en-us/vstudio/ee358787.aspx.

    Read the article

  • MS in Computer Science after BE in electronics

    - by Abhinav
    I am in my 3rd year of a Bachelors in Electronics and Electrical Communication, but from the first year I have been interested in Computer Science. At that time it was just my hobby, but in second year, when I joined robotics, my love for computer science grew. My team and I came in the top three in two national competitions (technical fests of different IITs) where we used image processing, hardware interfacing, etc. But then I realised that Computer Science is not just about coding. I took many lectures from free online schools like Udacity and Coursera in subjects related to Artificial Intelligence, Building a Search Engine, Design and Analysis of Algorithms, Programming a Robotic Car, Programming Languages, Machine Learning, Software Engineering as a Service, WebApps Engineering, Compilers, Applied Cryptography, etc. I also did some courses in Core and Advanced Java in my second year at a training institute. I will also be taking courses in Statistics, Databases and Discrete Mathematics from 25th June. Now I realize how vast the field of Computer Science is, and how efficient you become at deciding on algorithms and classifying problems into subfields that have been thoroughly researched, so you don't always resort to brute force or naive programming. Now this field has become a kind of passion for me. Added to that, I am doing my 6-month internship in the software field at Texas Instruments, where I am working on automation and algorithms. I also have some 5-6 good college-level projects in software and robotics. I really want to pursue an MS in some field of Computer Science. I am giving the GRE in October/November. So far I have a good CG of around 9.4/10, and one year of college is still left. Do I have any chance that a good university in the US will consider me for an MS in a field related to computer science or robotics? Also, can you suggest some things that I can do during this one year to increase my chances for an MS, or should I apply for EECS (Electrical Engineering and Computer Science) and then shift more towards Computer Science as my major option? My main aim is to do a PhD after the MS in CS if I can manage that somehow. I know that I will have to put in much more effort than CS undergraduates to understand things during an MS, but I will do that with full dedication. Also, when I talk with the CS students at my college, or during my internship, I didn't feel that I was missing very much of what they know, and I was very comfortable working with the software employees during my internship.

    Read the article

  • Sync Google Calendar with SharePoint Calendar

    - by dataintegration
    The ADO.NET Providers for Google and SharePoint make it easy to retrieve and update data in both Google's web services and SharePoint. This article shows how the SQL interface to data makes it easy to build applications that need to move data from one source to another. The application described here is a demo Windows application that synchronizes calendar events between Google and SharePoint, but the RSSBus Providers can be used to achieve integrations on both the .NET and the Java platforms, including more sophisticated features like full automation. Getting the Events Step 1: Google accounts can have several calendars. Obtain a list of a user's Google Calendars by issuing a query to the Calendars table. For example: SELECT * FROM Calendars. Step 2: In order to get a list of the events from a given Google Calendar, issue a query to the CalendarEvents table while specifying the CalendarId from the Calendars table. The resulting events can be further filtered by using the StartDateTime or EndDateTime columns. For example: SELECT * FROM CalendarEvents WHERE (CalendarId = '[email protected]') AND (StartDateTime >= '1/1/2012') AND (StartDateTime <= '2/1/2012') Step 3: SharePoint stores data in Lists. There are various types of lists, e.g., document lists and calendar lists. A SharePoint account can have several lists of the same type. To find all the calendar lists in SharePoint, use the ListLists stored procedure and inspect the BaseTemplate column. Step 4: The SharePoint data provider models each SharePoint list as a table. Get the events in a particular calendar by querying the table with the same name as the list. The events may be filtered further by specifying the EventDate or EndDate columns. For example: SELECT * FROM Calendar WHERE (EventDate >= '1/1/2012') AND (EventDate <= '2/1/2012') Synchronizing the Events Synchronizing the events is a simple process. Once the events from Google and SharePoint are available, they can be compared and synchronized based on user preference. The sample application does this based on user input, but it is easy to create one that does the synchronization automatically. The INSERT, UPDATE, and DELETE statements available in both data providers make it easy to create, update, or delete events as needed. Pre-Built Demo Application The executable for the demo application can be downloaded here. Note that this demo is built using BETA builds of the ADO.NET Provider for Google V2 and ADO.NET Provider for SharePoint V2, and will expire in 2013. Source Code You can download the full source of the demo application here. You will need the Google ADO.NET Data Provider V2 and the SharePoint ADO.NET Data Provider V2, which can be obtained here.
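    To make the pattern above concrete, here is a minimal C# sketch of the read-then-write loop, assuming the providers expose the usual ADO.NET classes. The GoogleConnection/SharePointConnection and GoogleCommand/SharePointCommand names, the namespaces, the connection strings, and the Summary/Title columns are assumptions for illustration only; check the provider documentation for the real names.

    using System;
    using System.Data.RSSBus.Google;      // assumed namespace
    using System.Data.RSSBus.SharePoint;  // assumed namespace

    class CalendarSyncSketch
    {
        static void Main()
        {
            // Connection strings are placeholders.
            using (var google = new GoogleConnection("User=...;Password=..."))
            using (var sharepoint = new SharePointConnection("URL=...;User=...;Password=..."))
            {
                google.Open();
                sharepoint.Open();

                // Step 2: read one month of events from a Google calendar
                // (the calendar id comes from the Calendars table in Step 1).
                var select = new GoogleCommand(
                    "SELECT Summary, StartDateTime, EndDateTime FROM CalendarEvents " +
                    "WHERE (CalendarId = 'CALENDAR_ID_FROM_CALENDARS_TABLE') " +
                    "AND (StartDateTime >= '1/1/2012') AND (StartDateTime <= '2/1/2012')", google);

                using (var reader = select.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // Step 4: write each event into the SharePoint calendar list named 'Calendar',
                        // assuming the provider supports named parameters.
                        var insert = new SharePointCommand(
                            "INSERT INTO Calendar (Title, EventDate, EndDate) VALUES (@t, @s, @e)", sharepoint);
                        insert.Parameters.AddWithValue("@t", reader["Summary"]);
                        insert.Parameters.AddWithValue("@s", reader["StartDateTime"]);
                        insert.Parameters.AddWithValue("@e", reader["EndDateTime"]);
                        insert.ExecuteNonQuery();
                    }
                }
            }
        }
    }

    A real implementation would first compare the two event sets, as the demo application does, so that only new or changed items are written.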

    Read the article

  • Eloqua Sales Awareness Training for Partners - London, November 28-29

    - by Richard Lefebvre
    We are pleased to invite you to the free-of-charge Oracle Eloqua Sales Awareness Training for Partners - London, November 28-29 - COME LEARN WHAT ALL THE BUZZ IS ABOUT! WHEN SALES & MARKETING BOND, REVENUE GROWS It's not one thing that makes a company successful with marketing automation – it's the combination of the best implementation, flexible long-term support and access to a community of marketing innovation that ensures you have the assistance you need every step of the way. Our customers choose Oracle|Eloqua because they know when sales and marketing work together, leads are called upon, quotas are crushed and revenues climb. Learn how Oracle|Eloqua can help you put marketing to work for you! COVERED IN THIS TRAINING Eloqua and the Customer Experience Strategy How Modern Marketing Works Introduction to Eloqua – Whiteboard POV Go To Market Playbook Competitive Landscape Integration Options Service Offerings With case studies, workshops and product demos to help you on your way to a new world of marketing expertise. LOGISTICS If you are ready to join us on this journey, now's the time to let us know! Tell us a little about your experience – complete this form to help us know you better (please ignore if your firm has already completed it). Complete this form to RSVP by Thursday, November 21, 2013. Find out more about Oracle|Eloqua, join the Oracle Eloqua Marketing Cloud Service Knowledge Zone and keep up on all the news! QUESTIONS? Contact [email protected] Please note that similar events will take place in other cities (Brussels, Istanbul and more to come) later in 2014.

    Read the article

  • Understanding the SQL Server 2008 R2 Installation Center

    - by Enrique Lima
    What is available to us through those links? Have you taken the time to explore and identify what could be useful to you? One of the many gems that has come to my attention is the possibility of provisioning SQL Server to work in an image-based environment (hint: Virtualization Template perhaps?!?). Planning: Includes requirements information, documentation, how-to guides, online documentation installation and other tools. Among the other tools you will find the System Configuration Checker and the Upgrade Advisor, both very important to ensure your deployment and installation will be successful. Installation: This section focuses on getting installations going, from standalone to cluster when it comes to new instances. You can also add new nodes to an existing cluster, perform upgrades (in this case to SQL Server 2008 R2), and find available updates. Maintenance: In this section we find options that assist with tasks from repairing corrupt installations to removing nodes from a cluster. An interesting option (whose benefits we should discuss in another post) is the Edition Upgrade, a feature expansion and addition based on your product installation (Developer to Enterprise, for example). Tools: From the System Configuration Checker, which identifies readiness for a successful deployment, to reporting on installed features and upgrading existing SSIS packages developed in the 2005 offering to the 2008 R2 release. Resources: Useful and essential links to gather information and guidance. Advanced: Here is where it gets interesting. I break this down into 3 main groups: Installation Automation: When you install SQL Server, a configuration file is dropped (ConfigurationFile.ini) that allows you to perform automated installations; there are switches and options that go with this to make the process work (a rough sketch follows below). Cluster configuration for Sysprep: Create images that are cluster-ready, with 2 options: start the prep work, then complete it at the final destination. Stand-alone configuration for Sysprep: Like the clustering counterpart, 2 options, prep and complete, giving you the option to create standard templates for your SQL Server deployments. I find it fitting that the 3 topics listed here should (and will) be additional topics I will discuss. Options: Very clear and specific about what this means: select the Processor Type or the Installation Media Root Path.
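    To make the Installation Automation idea concrete, a minimal unattended install could look roughly like the sketch below. Treat the section name, keys and values as illustrative assumptions rather than a tested template; the safest route is to generate a real ConfigurationFile.ini from a trial run of setup and adapt it.

    ; ConfigurationFile.ini - minimal illustrative sketch, not a verified template
    [SQLSERVER2008]
    ACTION="Install"
    FEATURES=SQLENGINE,SSMS
    INSTANCENAME="MSSQLSERVER"
    SQLSYSADMINACCOUNTS="DOMAIN\DBAGroup"
    QUIET="True"

    Then point setup at the file from an elevated command prompt:

    setup.exe /ConfigurationFile=ConfigurationFile.ini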

    Read the article

  • “It Isn’t Easy At All; Otherwise, Everyone Would Be Doing It”

    - by Kathryn Perry
    A few months ago, JP Saunders (pictured left), who leads the go-to-market initiatives for the Oracle CX Service offering, kicked off a series of articles about modern customer service. He contends that to take care of customers, and the people who support those customers, companies need to make it easy to deliver consistently great experiences. But it's not easy; it's an art. The six posts in The Art of Easy series will help you better understand some of the customer service challenges you face and how to avoid common pitfalls. We pulled them all together here in one post for continuity and easy access. Saunders introduces the series with The Art of Easy: Make It Easy To Deliver Great Customer Service Experiences (Part 1). The Art of Easy: Offer Self Service With the Emphasis on Service (Part 2) by David Fulton (pictured left): David Fulton, Director of Product Management, Oracle Service Cloud, shares five tenets of customer self service that move an organization closer to becoming a modern customer service business. Easy Decisions For Complex Problems (Part 3) by Heike Lorenz (pictured right): Heike Lorenz, Director of Global Product Marketing, Policy Automation, writes about automating service policies to ensure that the correct decisions are being applied to the right people. The goal is to nurture the trusted relationships with customers during complex decision-making processes. Moving at the Speed of Easy (Part 4) by Chris Omland (pictured left): Chris Omland, Director of Product Management, Oracle Service Cloud, addresses the need for speed to keep up with customers' expectations. His advice: start with a platform that enables agile innovation, respects a company's unique needs, and has proven reliability to protect customer relationships. Knowledge Makes It Easy For Everyone (Part 5) by Nav Chakravarti (pictured right): Vice President Nav Chakravarti, Oracle Service Cloud, talks about managing the knowledge that customers need and want. He coaches readers on delivering answers to customers' questions easily, in context, with relevance, reliably, and accurately. Making Easy, Both Effective and Efficient (Part 6) by Melinda Uhland (pictured left): Melinda Uhland, Oracle CX Product Management, teaches us that happy agents produce happy customers. A Modern Customer Service organization is one that invests in its agents and empowers them with tools to make them efficient and effective, which, in turn, improves customer results.

    Read the article

  • Unable to list contents/remove directory (linux ext3)

    - by RedKrieg
    System is CentOS5 x86_64, completely up to date. I've got a folder that can't be listed (ls just hangs, eating memory until it is killed). The directory size is nearly 500k: root@server [/home/user/public_html/domain.com/wp-content/uploads/2010/03]# stat . File: `.' Size: 458752 Blocks: 904 IO Block: 4096 directory Device: 812h/2066d Inode: 44499071 Links: 2 Access: (0755/drwxr-xr-x) Uid: ( 3292/ user) Gid: ( 3287/ user) Access: 2012-06-29 17:31:47.000000000 -0400 Modify: 2012-10-23 14:41:58.000000000 -0400 Change: 2012-10-23 14:41:58.000000000 -0400 I can see the file names if I use ls -1f, but it just repeats the same 48 files ad infinitum, all of which have non-ascii characters somewhere in the file name: La-critic\363-al-servicio-la-privacidad-300x160.jpg When I try to access the files (say to copy them or remove them) I get messages like the following: lstat("/home/user/public_html/domain.com/wp-content/uploads/2010/03/Sebast\355an-Pi\361era-el-balc\363n-150x120.jpg", 0x7fff364c52c0) = -1 ENOENT (No such file or directory) I tried altering the code found on this man page and modified the code to call unlink for each file. I get the same ENOENT error from the unlink call: unlink("/home/user/public_html/domain.com/wp-content/uploads/2010/03/Marca-naci\363n-Madrid-150x120.jpg") = -1 ENOENT (No such file or directory) I also straced a "touch", grabbed the syscalls it makes and replicated them, then tried to unlink the resulting file by name. This works fine, but the folder still contains an entry by the same name after the operation completes and the program runs for an arbitrarily long time (strace output ended up at 20GB after 5 minutes and I stopped the process). I'm stumped on this one, I'd really prefer not to have to take this production machine (hundreds of customers) offline to fsck the filesystem, but I'm leaning toward that being the only option at this point. If anyone's had success using other methods for removing files (by inode number, I can get those with the getdents code) I'd love to hear them. (Yes, I've tried find . -inum <inode> -exec rm -fv {} \; and it still has the problem with unlink returning ENOENT) For those interested, here's the diff between that man page's code and mine. I didn't bother with error checking on mallocs, etc because I'm lazy and this is a one-off: root@server [~]# diff -u listdir-orig.c listdir.c --- listdir-orig.c 2012-10-23 15:10:02.000000000 -0400 +++ listdir.c 2012-10-23 14:59:47.000000000 -0400 @@ -6,6 +6,7 @@ #include <stdlib.h> #include <sys/stat.h> #include <sys/syscall.h> +#include <string.h> #define handle_error(msg) \ do { perror(msg); exit(EXIT_FAILURE); } while (0) @@ -17,7 +18,7 @@ char d_name[]; }; -#define BUF_SIZE 1024 +#define BUF_SIZE 1024*1024*5 int main(int argc, char *argv[]) { @@ -26,11 +27,16 @@ struct linux_dirent *d; int bpos; char d_type; + int deleted; + int file_descriptor; fd = open(argc > 1 ? argv[1] : ".", O_RDONLY | O_DIRECTORY); if (fd == -1) handle_error("open"); + char* full_path; + char* fd_path; + for ( ; ; ) { nread = syscall(SYS_getdents, fd, buf, BUF_SIZE); if (nread == -1) @@ -55,7 +61,24 @@ printf("%4d %10lld %s\n", d->d_reclen, (long long) d->d_off, (char *) d->d_name); bpos += d->d_reclen; + if ( d_type == DT_REG ) + { + full_path = malloc(strlen((char *) d->d_name) + strlen(argv[1]) + 2); //One for the /, one for the \0 + strcpy(full_path, argv[1]); + strcat(full_path, (char *) d->d_name); + + //We're going to try to "touch" the file. 
+ //file_descriptor = open(full_path, O_WRONLY|O_CREAT|O_NOCTTY|O_NONBLOCK, 0666); + //fd_path = malloc(32); //Lazy, only really needs 16 + //sprintf(fd_path, "/proc/self/fd/%d", file_descriptor); + //utimes(fd_path, NULL); + //close(file_descriptor); + deleted = unlink(full_path); + if ( deleted == -1 ) printf("Error unlinking file\n"); + break; //Break on first try + } } + break; //Break on first try } exit(EXIT_SUCCESS);

    Read the article

< Previous Page | 261 262 263 264 265 266 267 268 269 270 271 272  | Next Page >