Search Results

Search found 4919 results on 197 pages for 'membership provider'.


  • Congratulations to the 2012 Oracle Spatial Award Winners!

    - by Mandy Ho
    I just returned from the 2012 Location Intelligence and Oracle Spatial User conference in Washington, DC, held by Directions Magazine. It was a great conference with presentations from across the country and globe, networking with Oracle Spatial users, and meeting new customers and partners. As part of the yearly event, Oracle recognizes special customers and partners for their contributions to advancing mainstream solutions using geospatial technology. This was the 8th year that Oracle has recognized innovative industry leaders. The awards were given in three categories: Education/Research, Innovator and Partnership. Here's a little on each of the award winners.
    Education and Research Award Winner: Technical University of Berlin. The Institute for Geodesy and Geoinformation Science of the Technical University of Berlin (TU Berlin) was selected for its leading research work in mapping urban and regional space onto virtual 3D city and landscape models, and its use of Oracle Spatial, including 3D Vector and GeoRaster type support, as the data management platform.
    Innovator Award Winner: Istanbul Metropolitan Municipality. Istanbul is the 3rd largest metropolitan area in Europe. One of its greatest challenges is organizing efficient public transportation for citizens and visitors. There are 15 types of transportation organized by 8 different agencies. To solve this problem, the Directorate of GIS of Istanbul Metropolitan Municipality created a multi-modal itinerary system to help citizens in their decision process for using public transport or their private cars. They chose to use the Oracle Spatial Network Model as the solution in their system, together with Java and SOAP web services.
    Partnership Award Winners: CSoft Group and OSCARS. The Partnership award is given to the ISV or integrator that has demonstrated outstanding achievements in partnering with Oracle on the development side and in taking solutions to market. CSoft Group - the largest Russian integrator and consultancy provider in CAD and GIS. CSoft was selected by the Oracle Spatial product development organization for its key role in delivering geospatial solutions based on Oracle Database and Fusion Middleware to the Russian market. OSCARS - provides consulting and training in France, Belgium and Luxembourg. With only 3 full-time staff, they have achieved significant success with leading-edge customer implementations leveraging the latest Oracle Spatial/MapViewer technologies, and delivering training throughout Europe.
    Finally, we also awarded two Special Recognition awards to two partners that helped contribute to the Oracle Partner Network Spatial Specialization. These two partners provided insight and technical expertise from a partner perspective to help launch the new certification program for Oracle Spatial technologies. Award Winners: ThinkHuddle and OSCARS.
    For more pictures of the conference and the awards, visit our Facebook page: http://www.facebook.com/OracleDatabase

    Read the article

  • What Would a CyberWar Do To Your Business?

    - by [email protected]
    In mid-February the Bipartisan Policy Center in the United States hosted Cyber ShockWave, a simulation of how the country might respond to a catastrophic cyber event. An attack takes place, they can't isolate where it came from or who did it, there are simulated press reports and market impacts... and the participants in the exercise have to brief the President and advise him/her on what to do. Last week, former Department of Homeland Security Secretary Michael Chertoff, who participated in the exercise, summarized his findings in Federal Computer Week. The article, given FCW's readership and the topic, is obviously focused on the public sector and US federal policies. However, it touches on some broader issues that impact the private sector as well -- and which are applicable to any government and country/region -- such as:
    · How would the US (or any) government collaborate to identify and defeat such an attack? Chertoff calls this out as a current gap. How do the public and private sector collaborate today? How would the massive and disparate collection of agencies and companies act together in a crunch?
    · What would the impact on industries and global economies be? Chertoff, and a companion article in Government Computer News, only touch briefly on the subject, focusing on the impact on capital markets. "There's no question this has a disastrous impact on the economy," said Stephen Friedman, former director of the National Economic Council under President George W. Bush, who played the role of treasury secretary. "You have financial markets shut down at this point, ordinary transactions are dramatically depleted, there's no question that this has a major impact on consumer confidence."
    That Got Me Thinking
    · How would it impact Oracle's customers? I know they have business continuity plans -- is this one of their scenarios? What if it's not? How would it impact manufacturing lines, ATM networks, customer call centers...
    · How would it impact me and the companies I rely on? The supermarket down the street, my Internet Service Provider, the service station where I bought gas last night.
    I sure don't have any answers, and neither do Chertoff or the participants in the exercise. "I have to tell you that ... we are operating in a bit of unchartered territory," said Jamie Gorelick, a former deputy attorney general who played the role of attorney general in the exercise. But it is a good thing that governments and businesses are considering this scenario and doing what they can to prevent it from happening.

    Read the article

  • PASS: 2013 Summit Location

    - by Bill Graziano
    HQ recently posted a brief update on our search for a location for 2013.  It includes links to posts by four Board members and two community members. I’d like to add my thoughts to the mix and ask you a question.  But I can’t give you a real understanding without telling you some history first.
    So far we’ve had the Summit in Chicago, San Francisco, Orlando, Dallas, Denver and Seattle.  Each has a little different feel and distinct memories.  I enjoyed getting drinks by the pool in Orlando after the sessions ended.  I didn’t like that our location in Dallas was so far away from all the nightlife.  Denver was in downtown but we had real challenges with hotels.  I enjoyed the different locations.  I always enjoyed the announcement during the third keynote with the location of the next Summit.
    There are two big events that impacted my thinking on the Summit location.  The first was our transition to the new management company in early 2007.  The event that September in Denver was put on with a six month planning cycle by a brand new headquarters staff.  It wasn’t perfect but came off much better than I had dared to hope.  It also moved us out of the cookie cutter conferences that we used to do into a model where we have a lot more control.  I think you’ll all agree that the production values of our last few Summits have been fantastic.
    That Summit also led to our changing relationship with Microsoft.  Microsoft holds two seats on the PASS Board.  All the PASS Board members face the same challenge: we all have full-time jobs and PASS comes in second place professionally (or sometimes further back).  Starting in 2008 we were assigned a liaison from Microsoft that had a much larger block of time to coordinate with us.  That changed everything between PASS and Microsoft.  Suddenly we were talking to product marketing, Microsoft PR, their event team, the Tech*Ed team, the education division, their user group team and their field sales team – locally and internationally.  We strengthened our relationship with CSS, SQLCAT and the engineering teams.  We had exposure at the executive level that we’d never had before.  And their level of participation at the Summit changed from under 100 people to 400-500 people.  I think those 400+ Microsoft employees have value at a conference on Microsoft SQL Server.  For the first time, Seattle had a real competitive advantage over other cities.
    I’m one that looked very hard at staying in Seattle for a long, long time.  I think those Microsoft engineers have value to our attendees.  I think the increased support that Microsoft can provide when we’re in Seattle has value to our attendees.  But that doesn’t tell the whole story.  There’s a significant (and vocal!) percentage of our membership that wants the Summit outside Seattle.  Post-2007 PASS doesn’t know what it’s like to have a Summit outside of Seattle.  I think until we have a Summit in another city we won’t really know the trade-offs.
    I think a model where we move every third or every other year is interesting.  But until we have another Summit outside Seattle and we can evaluate the logistics and how important it is to have depth and variety in our Microsoft participation we won’t really know.
    Another benefit that comes with a move is variety or diversity.  I learn more when I’m exposed to new things and new people.  I believe that moving the Summit will give a different set of people an opportunity to attend.
    Grant Fritchey writes “It seems that the board is leaning, extremely heavily, towards making it a permanent fixture in Seattle.”  I don’t believe that’s true.  I know there was discussion of that earlier but I don’t believe it’s true now.
    And that brings me to my question.  Do we announce the city now or do we wait until the 2012 Summit?  I’m happy to announce Seattle vs. not-Seattle as soon as we sign the contract.  But I’d like to leave the actual city announcement until the 2011 Summit.  I like the drama and mystery of it.  I also like that it doesn’t give you a reason to skip a Summit and wait for the next one if it’s closer or back in Seattle.  The other side of the coin is that your planning is easier if you know where it is.  What do you think?

    Read the article

  • SharePoint 2010 Hosting :: Sending SMS Alerts in SharePoint 2010 Over Office Mobile Service Protocol (OMS)

    - by mbridge
    In this post, I want to share exciting news about a new SharePoint 2010 feature. Finally it's possible to send SMS directly from SharePoint to mobile phones. The advantages of sending SMS instead of email messages are obvious: SMS alerts or reminders received on mobile phones are preferred over email messages that can be lost in the mass of spam.
    The interface is standard, as it's very similar to previous versions of the product. Adjustments are easy to do: simply enter the address of the Office Mobile Service (OMS) web service which you want to use for sending messages, then specify the connection parameters. Further details on Office Mobile Service are available below. The Test Service button checks whether the OMS web service is accessible using the provided URL (user name and password are not verified). This check is needed because the OMS web service URL depends on the mobile operator and country. It's now possible to select the method of sending alerts in the alert settings. The Email option is selected by default. The alert delivery method is displayed in the list of existing alerts.
    Office Mobile Service (OMS)
    SharePoint 2010 uses external servers, similar to SMTP servers, for sending SMS alerts. However, Microsoft started development and promotion of their own protocol instead of using existing ones. That is how Office Mobile Service (OMS) appeared. This open protocol enables clients to send text and multimedia messages (mobile messages) remotely to the server, which processes these messages and delivers them to mobile phones. A typical scenario for this protocol is data transfer between a computer application and a mobile phone. The recipient can answer messages, and the server in return will deliver the answer by SMTP protocol, i.e. by email. A key quality of this protocol is that it's built on top of the HTTP(S) and SOAP protocols. This means that, in fact, the SMS gateway must expose a standardized web service. What do you get from the web service? The ability to send SMS from any platform you want (a rough sketch of such a call follows at the end of this post). The protocol is still being developed, and version 0.2 from 08/28/2009 was available when the article was published.
    To promote the protocol and simplify provider search, Microsoft published the page http://messaging.office.microsoft.com/HostingProviders.aspx, which helps you get the list of providers that support the OMS protocol and message delivery to your operator. All you need to do is decide which provider to use, complete the agreement, then adjust the SharePoint connection parameters and start working. Some providers advertise themselves not only to clients but to mobile operators as well; they offer automatic addition to the list of Office Mobile Service providers. To view the full specification of OMS, please go to http://msdn.microsoft.com/en-us/library/dd774103.aspx.
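    As a rough illustration of that platform independence, here is a minimal C# sketch of calling such an OMS-style SOAP web service through a proxy generated with "Add Service Reference" in Visual Studio. The class and operation names (OmsServiceClient, DeliverMessage) and the parameters are hypothetical placeholders, not taken from the OMS specification; a real client would be generated from your provider's WSDL and use that provider's actual operations.
        // Hypothetical proxy generated from a provider's OMS WSDL.
        // Names below are placeholders for illustration only.
        using (var client = new OmsServiceClient("OmsEndpoint"))
        {
            // Credentials issued by your SMS provider.
            client.ClientCredentials.UserName.UserName = "account-name";
            client.ClientCredentials.UserName.Password = "account-password";

            // Send a plain text message to one recipient; the real operation name
            // and message shape depend on the provider's WSDL.
            client.DeliverMessage(recipient: "+15555550123",
                                  body: "Your SharePoint alert: task due tomorrow.");
        }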

    Read the article

  • Day 1 - Finding Like Minds

    - by dapostolov
    So, is being a Game Developer any different from being an IT Developer? I picture a poorly lit environment where I get to purchase my own desk lamp; I'm thinking one of those huge lava lamps that pump out so much heat you could fry an egg on it. To my right: a "great wall" of empty coke cans dwarfs me. Eating my last slice of pizza I look across my desk to see a fellow developer with a smug look on his face; he's just coded his latest module for the game and it looks like he's in nirvana. My duty, of course, is to remind him to keep focused on the job at hand. So, picking up my trusty elastic and aerodynamically crafted paper bullet I begin a 10 minute war of welts and laughter which is promptly interrupted by our Project Manager demanding more details from our morning Scrum meeting. After providing about 5 minutes of geek speak and several words of comfort to make his eyes glaze over... it hits me, the idea for the module... beckoning my developer friend over, we quickly shoo the Project Manager away and begin our brainstorming frenzy... now, where'd I put that full can of coke?
    OK. OK. This isn't probably the most ideal game developer environment, but it definitely sounds fun to me... and from what I gather is nothing like most game development companies. But I'm not doing this blog series to "go pro"; like I stated in my first post I want to make a 2D game from an idea my best friend and I drummed up long, long ago. I'm in this for the passion AND I want to see how easy it is for us .Net Developers to create a game. So where do I start? Where can I find like-minded individuals? What technologies are there? What do I need to make a video game? The questions are endless... AND... since I already have an idea... let's start with... Technology (yes, I'm a geek, live with it...)
    Technology
    OK. Predominantly, games are still made in C++ or even C. I'm not sure how much assembly code is floating around lately, however, that is not my concern. I do know C / C++ from my past, enough to even get me by, but I'm mainly interested in a recent, not-so-new, technology called XNA. What is XNA? XNA allows us .Net Developers to make 2D / 3D games for Windows, Xbox*, and Windows Phone 7*.
    * = for a nominal fee *cough*
    The following link is your one stop shop to XNA game development: http://creators.xna.com/en-US/education/gettingstarted
    The above site hosts information such as:
    - getting started
    - a sample/instructional shooter game in 2D / 3D with code (if I'm taking too long for you in this blog series)
    - downloads
    - starter kits... http://creators.xna.com/en-US/education/starterkits/
    And of course... forums. You can also subscribe and pay for their premium membership which gets you some pretty awesome tutorials, resources, downloads, and premium community support. Some general Wiki information about XNA: http://en.wikipedia.org/wiki/XNA_%28Microsoft%29
    Community Support
    OK. Let's move on to industry and community support. Apart from XNA, there are some really cool sites out there, I just haven't found all of them yet. However, I found a really cool game development website called Gamasutra. You can click on the following link to get you there: http://www.gamasutra.com/
    The site is 100% dedicated to "The Art & Business of Making Games". Armed with blogs, Twitter, jobs/resumes and, most importantly, industry news, one could subscribe to the feed and get lost in the wealth of information it provides.
    On a side note: I remember Gamasutra being around when my best friend and I wanted to make a video game... meaning, they've been around for a while now. I think the most beneficial aspect of this site is to understand the industry you want to get into. Otherwise, it's just a cool site to keep up to date with the industry in general.
    Another community support option is LinkedIn. Amongst the land of extremely bloated achievements and responsibilities lie 3 groups (that I have found) that deal with game development:
    http://www.linkedin.com/groups?gid=59205 - Game Developers
    http://www.linkedin.com/groups?gid=824817 - DirectX Game Developer Network
    http://www.linkedin.com/groups?gid=756587 - DirectX Developers
    The Game Developers group in LinkedIn is by far the most active of the three and could possibly provide a wealth of support.
    What I've done thus far:
    - I lightly researched the XNA technology
    - I looked around for some community sites to assist me
    - I downloaded XNA Game Studio 3.1 on my PC and installed it into my IDE
    - I even tried both tutorials! http://creators.xna.com/en-US/education/gettingstarted/bgintro/chapter1
    Best Regards D.

    Read the article

  • MySQL Connector/Net 6.6.3 Beta 2 has been released

    - by fernando
    MySQL Connector/Net 6.6.3, a new version of the all-managed .NET driver for MySQL, has been released. This is the second of two beta releases intended to introduce users to the new features in the release. This release is feature complete and should be stable enough for users to understand the new features and how we expect them to work. As is the case with all non-GA releases, it should not be used in any production environment. It is appropriate for use with MySQL server versions 5.0-5.6. It is now available in source and binary form from http://dev.mysql.com/downloads/connector/net/#downloads and mirror sites (note that not all mirror sites may be up to date at this point - if you can't find this version on some mirror, please try again later or choose another download site).
    The 6.6 version of MySQL Connector/Net brings the following new features:
    * Stored routine debugging
    * Entity Framework 4.3 Code First support (a minimal usage sketch appears at the end of this post)
    * Pluggable authentication (now third parties can plug new authentication mechanisms into the driver)
    * Full Visual Studio 2012 support: everything from Server Explorer to IntelliSense & the Stored Routine debugger
    Stored Procedure Debugging
    -------------------------------------------
    We are very excited to introduce stored procedure debugging into our Visual Studio integration. It works in a very intuitive manner by simply clicking 'Debug Routine' from Server Explorer. You can debug stored routines, functions & triggers.
    This release contains fixes specific to the debugger as well as fixes in other areas of Connector/NET:
    * Added feature to define initial values for InOut stored procedure arguments.
    * Debugger: Fixed Visual Studio locked connection after debugging a routine.
    * Fix for bug "Cannot Create an Entity with a Key of Type String" (MySQL bug #65289, Oracle bug #14540202).
    * Fix for bug "CacheServerProperties can cause 'Packet too large' error" (MySQL bug #66578, Oracle bug #14593547).
    * Fix for handling unnamed parameters in MySqlCommand. This fix allows MySqlCommand to handle parameters without requiring names (e.g. INSERT INTO Test (id,name) VALUES (?, ?)) (MySQL bug #66060, Oracle bug #14499549).
    * Fixed end of line issue when debugging a routine.
    * Added validation to avoid overwriting a routine backup file when it hasn't changed.
    * Fixed inheritance in Entity Framework Code First scenarios (MySQL bug #63920, Oracle bug #13582335).
    * Fixed "Trying to customize column precision in Code First does not work" (MySQL bug #65001, Oracle bug #14469048).
    * Fixed bug "ASP.NET Membership database fails on MySQL database UTF32" (MySQL bug #65144, Oracle bug #14495292).
    * Fix for MySqlCommand.LastInsertedId holding only 32-bit values (MySQL bug #65452, Oracle bug #14171960).
    * Fixed "Decimal type should have digits at right of decimal point"; the default is now 2, and the user's changes in the EDM designer are recognized (MySQL bug #65127, Oracle bug #14474342).
    * Fix for NullReferenceException when saving an uninitialized row in Entity Framework (MySQL bug #66066, Oracle bug #14479715).
    * Fix for error when calling RoleProvider.RemoveUserFromRole(): caused an exception due to a wrong table being used (MySQL bug #65805, Oracle bug #14405338).
    * Fix for "Memory Leak on MySql.Data.MySqlClient.MySqlCommand": too many MemoryStream instances created (MySQL bug #65696, Oracle bug #14468204).
    * Added ANTLR attribution notice (Oracle bug #14379162).
    * Fix for debugger failing when having a routine with an if-elseif-else.
    * Also, the programming interface for authentication plugins has been redefined.
    Some limitations remain, due to the current debugger architecture:
    * Some MySQL functions cannot be debugged currently (get_lock, release_lock, begin, commit, rollback, set transaction level).
    * Only one debug session may be active on a given server.
    The Debugger is feature complete at this point. We look forward to your feedback.
    Documentation
    -------------------------------------
    You can view current Connector/Net documentation at http://dev.mysql.com/doc/refman/5.5/en/connector-net.html
    You can find our team blog at http://blogs.oracle.com/MySQLOnWindows.
    You can also post questions on our forums at http://forums.mysql.com/.
    Enjoy and thanks for the support!
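    To give a feel for the Entity Framework 4.3 Code First support listed above, here is a minimal, hedged sketch of a Code First model used against MySQL through Connector/Net. The class names and the "BlogContext" connection string name are assumptions for illustration; the MySql.Data.MySqlClient provider and connection string must be configured in the application's config file.
        // Minimal Code First sketch (assumes the MySql.Data.MySqlClient EF provider
        // is registered in app.config and a "BlogContext" connection string exists).
        using System.Data.Entity;

        public class Post
        {
            public int Id { get; set; }
            public string Title { get; set; }
        }

        public class BlogContext : DbContext
        {
            public BlogContext() : base("BlogContext") { }   // named connection string
            public DbSet<Post> Posts { get; set; }
        }

        class Program
        {
            static void Main()
            {
                using (var db = new BlogContext())
                {
                    db.Posts.Add(new Post { Title = "Hello from Connector/Net 6.6" });
                    db.SaveChanges();   // Connector/Net translates this into MySQL DML
                }
            }
        }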

    Read the article

  • ApiChange Corporate Edition

    - by Alois Kraus
    In my initial announcement I could only cover a small subset of what ApiChange can do for you. Let's look at how ApiChange can help you fix bugs due to wrong usage of an API in a fraction of the time it would normally take. It happens that software is tested and some bugs show up. One bug could be: we get way too many log messages during our test run. Now you have the task of finding the most frequent messages and eliminating the log calls from the source code. But what about the myriad other log calls? How can we check that the distribution of log calls is nearly equal across all developers? And if not, how can we contact the developer to check his code?
    ApiChange can help you connect these loose ends. It combines several information silos into one cohesive view. The picture below shows how it is able to fill the gaps. The public version does currently "only" parse the binaries and pdbs to give you, for a -whousesmethod query, the following columns. If it happens that you have Rational ClearCase (a source control system) in your development shop and an Active Directory in place, then ApiChange will try to determine, from the source file (itself determined from the pdb), the last check-in user, who should be present in your Active Directory. From there it is only a small hop to an LDAP query against your AD domain or the GC (Global Catalog) to get, from the user name, his full name, email, phone number, department, and so on. ApiChange will append this additional data to all of your query results which contain source files if you add the -fileinfo option. As I said, this is currently not enabled by default since the AD domain needs to be configured, which is currently only some hard-coded values in the SiteConstants.cs source file of ApiChange.Api.dll.
    Once you have this data you can generate metrics based on source file, developer, assembly, ... and add additional data by drag and drop directly into the pivot tables inside Excel. This allows you, for example, to generate a report which lists the source files with the most log calls in descending order, along with the developer name and email, in the pivot table. Armed with this knowledge you can take meaningful measures, e.g. ask the developer whether the huge number of log calls in this source file can be optimized. I am aware that this is a very specific scenario, but it is a huge time saver when you are able to fill the missing gaps of information. ApiChange does this in an extensible way.
    namespace ApiChange.ExternalData
    {
        // Implement this interface to map a source file to the person (or other
        // metadata) responsible for it, e.g. via your source control system.
        public interface IFileInformationProvider
        {
            UserInfo GetInformationFromFile(string fileName);
        }
    }
    It defines an interface where you can implement your custom information provider to close the gap between the source control system and the real person I have to send an email to, to ask if his code needs a closer inspection.
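    As a hedged sketch (not taken from the ApiChange sources), a custom provider that resolves file ownership from a simple "file;email" CSV export instead of ClearCase/Active Directory might look like this. The UserInfo property used below (Mail) is an assumption based on the fields listed above, not a confirmed member of the real type:
        using System;
        using System.IO;
        using System.Linq;
        using ApiChange.ExternalData;

        // Hypothetical example provider: looks up the owner of a file in a CSV mapping.
        public class CsvFileInformationProvider : IFileInformationProvider
        {
            private readonly ILookup<string, string> owners;

            public CsvFileInformationProvider(string csvPath)
            {
                // Each line: <full path to source file>;<email of last check-in user>
                owners = File.ReadAllLines(csvPath)
                             .Select(line => line.Split(';'))
                             .ToLookup(parts => parts[0], parts => parts[1],
                                       StringComparer.OrdinalIgnoreCase);
            }

            public UserInfo GetInformationFromFile(string fileName)
            {
                // "Mail" is an assumed property name on UserInfo, for illustration only.
                return new UserInfo
                {
                    Mail = owners[fileName].FirstOrDefault() ?? "unknown"
                };
            }
        }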

    Read the article

  • TomEE Integration in NetBeans Next

    - by Geertjan
    At JavaOne 2013, there was a lot of buzz around the TomEE server, e.g., many Tweets, a nice party, and a new TomEE consulting company. For those tracking TomEE developments, it is interesting to note that recently the NetBeans IDE development builds have had added to them... TomEE support. Note: The TomEE support described here is not in NetBeans IDE 7.4, but in development builds for the next release of NetBeans IDE. For example, with NetBeans IDE development builds you're able to:
    • register TomEE as a server in the Services window (TomEE has several distributions; one can use the "with JAX-RS" one, for example)
    • create a Java EE 6 web project (e.g., Maven based) against this server
    • create JPA entities from a database
    • create JAX-RS classes from JPA entities
    • create JSF pages from JPA entities
    In addition, the IDE lets you create a new data source for TomEE and deploy it to the server. The IDE figures out the components that are already packaged in TomEE, and the fact that (unlike with regular Tomcat) it does not need to package any components such as the JSF implementation, persistence provider, or JAX-RS runtime, so that the resulting WAR file is very small. The IDE can also do "deploy on save" with TomEE, so that your development cycle is very fast.
    Adam Bien blogged about how he set up TomEE some time ago, here. The official support in NetBeans IDE will be much more tightly integrated, simplifying the steps Adam describes. For example, the IDE does step 2 from Adam's blog for you, i.e., it sets up TomEE deployment roles. Moreover, it knows about all the technologies included in TomEE so that it can optimize the packaging; it knows about TomEE's persistence setup; it can work with TomEE data sources, etc. Below you see a Maven-based Java EE 6 PrimeFaces application (all entities and JSF pages generated from a database) deployed to TomEE in NetBeans IDE: And here's the management console for configuring and finetuning TomEE in NetBeans IDE:
    When I tried out the NetBeans IDE development build and TomEE, to see how everything fits together, I was surprised at how fast TomEE started up. Not sure what they did to it, but it seems like a server on steroids. And setting it up in NetBeans IDE was trivial. Add the simple setup of TomEE in NetBeans IDE to the many benefits that the widely praised out-of-the-box NetBeans Maven tools make possible, together with the fact that not one single plugin had to be installed to get everything you see described here up and running... and you have a really powerful combination of dev tools, all for free.

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-14

    - by Bob Rhubart
    Duke's Choice Award Nominations Close Friday! | The Java Source
    The Duke's Choice Awards celebrate extreme innovation in the world of Java technology. Nominate an individual, a group or company who show the best in Java innovation. Nominate at Java.net/dukeschoice. Nominations are open until this Friday, June 15.
    Whole Lotta Virtualization Goin' On | Rick Ramsey
    The OTN Garage's Rick Ramsey shares a list of recent virtualization articles available on OTN, along with a link to a video by The Killer, Mr Jerry Lee Lewis.
    A Pragmatic Path to Navigating your Infrastructure to the Cloud | The WebLogic Server Blog
    Ruma Sanyal offers an overview of a recent Oracle webcast featuring Gartner VP and Distinguished Analyst Andy Butler and Vice President and Gartner Fellow Massimo Pezzini.
    Migrating C/C++ embedded SQL code | Tom Laszewski
    Cloud migration expert Tom Laszewski explains the how-to in 5 easy steps.
    Aetna Dumps Its Siloed Enterprise Architecture for SOA | CIO.com
    CIO writer Stephanie Overby tells the story of how one major health insurance provider put the "Enterprise" back in Enterprise Architecture. (H/T to Joe McKendrick for this story.)
    Downloading specific video renditions in WebCenter Content | Kyle Hatlestad
    How-to from Oracle WebCenter & ADF A-Team blogger Kyle Hatlestad.
    Eclipse DemoCamp - June 2012 - Redwood Shores, CA
    Location: Oracle HQ - 10 Twin Dolphin Drive, Redwood Shores, CA (Map). Date and Time: Wednesday, June 13, 2012, from 6pm - 9pm. Agenda: The evolution of Java persistence, Doug Clarke, EclipseLink Project Lead, Oracle; Integrating BIRT into Applications, Ashwini Verma, Actuate Corporation; Leveraging OSGi In The Enterprise, Kamal Muralidharan, Lead Engineer, eBay; Developing Rich ADF Applications with Java EE, Greg Stachnick, Oracle; NVIDIA® Nsight™ Eclipse Edition, Goodwin (Tech lead - Visual tools), Eugene Ostroukhov (Senior engineer - Visual tools)
    2012 Oracle Fusion Middleware Innovation Awards - Win a FREE Pass to Oracle OpenWorld 2012 in San Francisco
    Share your use of Oracle Fusion Middleware solutions and how they help your organization drive business innovation. You just might win a free pass to Oracle OpenWorld 2012 in San Francisco. Deadline for submissions is July 17, 2012.
    BI Architecture Master Class for Partners - Oracle Architecture Unplugged
    Date: June 21, 2012. No slides, no fluff. This workshop will be highly interactive and is aimed at Oracle OPN member partners who are IT Architects and BI+W specialists. The focus will be on architectural issues and considerations.
    DevOps: Evolving to Handle Disruption | JP Morgenthal
    The subject of DevOps came up this week during an OTN ArchBeat podcast interview with Ron Batra and James Baty on the role of the cloud architect (that program will be available in a few weeks). Morgenthal's article for InfoQ offers a good overview of what DevOps is and how it works.
    Thought for the Day
    "Elegance is not a dispensable luxury but a factor that decides between success and failure." — Edsger Dijkstra
    Source: softwarequotes.com

    Read the article

  • SharePoint 2010, Cloud, and the Constitution

    - by Michael Van Cleave
    The other evening an article on the Red Tape Chronicles caught my eye. The article, written by Bob Sullivan and titled "The Constitutional Issues of Cloud Computing", was very interesting in regards to the direction most of the technical world is going. We all have been inundated with reasons to utilize cloud computing, such as price, availability, or even scalability; but what Bob brings up is a whole separate view of why a business might not want to move toward the cloud for services or applications.
    The overall point of the article was pretty simple. It all boiled down to the summation that hosting "things" in the cloud (email, documents, etc.) is interpreted differently under the law regarding constitutional search and seizure than a document or item that is kept in physical form at a business or home. If you have it stored physically, someone would have to get a warrant to search for it or seize it; but if it is stored off in the cloud and the ISV or provider is subpoenaed for the item, then they will usually give access to the information. Obviously this is a big difference in interpretation of the law and the constitution due to technology.
    So you might ask, "Where does this fit in with SharePoint?" Well, the overall push for this next version of SharePoint is one that gives a business ultimate flexibility to utilize the cloud. In one example, this upcoming version gracefully lends itself to multi-tenancy so that online or "cloud" hosting would be possible by service providers. Another aspect of the upcoming version is that it has updated its ability to store content outside of the database and in a cheaper, commoditized storage facility. This is called Remote Blob Storage (or RBS), which is the next evolution of External Blob Storage (or EBS). With this new functionality that businesses might look forward to, it is extremely important for them to understand that they might be opening themselves up to laws that do not require a warrant to search or seize their information stored in the cloud.
    It will be interesting to see how this all plays out in the next few months. Usually the laws change slowly in comparison to technology, so it might be a while until we see whether it is actually constitutional to treat someone's content in the cloud differently than it would be treated in their possession. However, until some type of parity happens or more concrete laws address the differences, be very careful about what you put in the cloud. Michael

    Read the article

  • Partner Training on Endeca 2-Days Hands-on Fundamentals

    - by Mike.Hallett(at)Oracle-BI&EPM
    Utrecht, NL - Monday, January 28 until Tuesday, January 29: To Register Click here - cost €475 per person
    Utrecht, NL - Thursday, January 31 until Friday, February 1: To Register Click here - cost €475 per person
    Oracle Belgium - Wednesday, February 6 to Thursday, February 7: To Register Click here - cost €535 per person
    Oracle Belgium - Thursday, February 28 until Friday, March 1: To Register Click here - cost €535 per person
    The Oracle Endeca Information Discovery (OEID) fundamentals training is designed to give partners an understanding of OEID’s features, and how it complements the existing Oracle Business Intelligence suite. Participants will learn how to develop and implement solutions using a Data Discovery method. Training is in Dutch.
    This is a two-day class which starts with the introduction of Endeca in the proposition of Oracle Business Analytics. The underlying architecture and technology will also be covered. The majority of this fundamentals training is based on a hands-on workshop. In this workshop all participants will build several Endeca dashboards based on worked-out examples. During this workshop we will also spend time on how to extract social media and other unstructured data, combined with text enrichment. This training is developed and will be given by Aotrta Business Intelligence, an Oracle Approved Education Center for OBIEE and OEID in EMEA.
    Prerequisites: You must bring a 64-bit laptop with you for the hands-on labs. Attendees should have experience and familiarity with the basic concepts of business intelligence and be OPN Partners with Gold or above membership.

    Read the article

  • Windows Phone 7 Development Updates &ndash; March 8th 2011

    - by Nikita Polyakov
    Here are the latest updates from the Windows Phone 7 developer world that went live this month. Here are some of the latest numbers: Windows Phone Marketplace currently offers more than 9,000 quality apps and games and enjoys a base of over 32,000 registered developers, delivering an average of 100 new apps every day. There have been over 1 million downloads of the developer tools for Windows Phone 7.
    Trial versions help you sell more
    Trials result in higher sales, by the numbers:
    • Users like trials - paid apps with trial functionality are downloaded 70 times more than paid apps that don't.
    • Nearly 1 out of 10 trial apps downloaded convert to a purchase and generate 10 times more revenue on average than paid apps that don't include trial functionality.
    • Trial downloads convert to paid downloads quickly. More than half of trial downloads that convert to a sale do so within the 1st 24 hours of trial download, and mostly within 2 hours of trial download.
    (A small sketch of checking for trial mode in code follows at the end of this post.)
    Microsoft Ad Control is gaining traction
    By the numbers - ad-supported Windows Phone 7 apps:
    • Roughly ¼ of all registered U.S. WP7 developers have downloaded the free Ad SDK for Silverlight and XNA
    • Of ad-funded apps, over 95 percent use the free Microsoft Advertising Ad Control
    • Monthly impressions from our Ad Exchange have continued to grow by double digits - impressions increased by 376 percent since January
    For the Ad Control, the first wave of “How Do I” videos are now available on MSDN:
    • Create an Ad in a Windows Phone 7 XNA Game App
    • Register Ad-Enabled Windows Phone 7 Apps
    • Measure Ad Performance of Windows Phone 7 Apps
    Broader international app submission for free apps through Yalla Apps
    As of today you can start submitting your free applications in developer markets that are currently not covered by Microsoft. To submit your free application if you DO NOT belong to one of the Marketplace-supported countries, go to: Yalla Apps
    Marketplace Policy Updates: Free App Marketplace Submission upped to 100 and other news
    Microsoft has been revisiting a few of our Marketplace policies based on feedback from developers to reduce friction and cost; word for word:
    1. We have raised the limit on the number of certifications that can be performed for FREE apps at no cost to the registered developer from five to 100. This was a common request from developers which we are glad to implement after building alternate methods to ensure that users can find and download high quality apps.
    2. We have converted policy 5.6 - related to the inclusion of contact information for support - from a mandatory to an optional policy. This is still a strongly recommended best practice, but we recognized and responded to developer feedback that this policy was creating excessive drag on the certification process for developers without commensurate user benefit for all apps.
    3. We also understand the desire for clarification with regard to our policy on applications distributed under open source licenses. The Marketplace Application Provider Agreement (APA) already permits applications under the BSD, MIT, Apache Software License 2.0 and Microsoft Public License. We plan to update the APA shortly to clarify that we also permit applications under the Eclipse Public License, the Mozilla Public License and other, similar licenses, and we continue to explore the possibility of accommodating additional OSS licenses.
    Enjoy and happy coding! Official Blog Post for reference.
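    As a quick, hedged sketch of how a paid app with trial functionality typically checks its license at runtime, here is the usual pattern built on the Windows Phone LicenseInformation API; the helper class name and caching approach are just illustrative, not part of the SDK:
        using Microsoft.Phone.Marketplace;

        public static class TrialHelper
        {
            private static readonly LicenseInformation License = new LicenseInformation();

            // Cache the result; apps commonly check it once at launch and again on activation.
            public static bool IsTrial { get; private set; }

            public static void RefreshLicense()
            {
                IsTrial = License.IsTrial();
            }
        }

        // Somewhere in the app, e.g. before unlocking a premium feature:
        // if (TrialHelper.IsTrial) { /* show the "buy full version" prompt */ }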

    Read the article

  • Visual WebGui's XAML based programming for web developers

    - by Webgui
    While ASP.NET provides an event-based approach, it is completely dismissed when working with AJAX: the richness of the server is lost and replaced with JavaScript programming, coupled with a very high security risk. Visual WebGui reinstates the power of the server in AJAX development and provides a stateful yet scalable, server-centric architecture that delivers the benefits and user productivity of AJAX with the security and developer productivity we had before AJAX stormed into our lives.
    "When I first came up with the concept of Visual WebGui, I was frustrated by the fragile and complex nature of developing web applications. The contrast in productivity between working in a fully OOP compiled environment vs. scripting even today, with JQuery, Dojo and such, is still huge. Even today the greatest sponsor of JavaScript programming, Google, is offering a framework to avoid JavaScript using Java that compiles to JavaScript (GWT). So I decided to find a way to abstract the complexity or rather delegate the complex job to enable developers to concentrate on the “What” instead of the “How” and embraced the Form based approach," said Guy Peled, the inventor of Visual WebGui.
    Although traditional OOP development still rules the enterprise, the differences between web sites and web applications have blurred, and so have the differences between classic developers and web developers. As a result, we now see declarative languages in desktop/backend development environments (WPF / WF) and we see OOP gaining more and more power in web development (ASP.NET MVC / ASP.NET DOM). However, what has not changed is enterprise need for security, development ROI, reach, highly responsive and interactive UIs, and scalability.
    The advantages that declarative languages and 'on demand' compilation provide over classic development are mostly the flexibility and the more readable initialize component they offer, which is what Gizmox is aspiring to provide by replacing the designer initialize component with XAML code. The code in this new project template will be compiled on demand using the build provider mechanism ASP.NET has. This means that the performance hit is only on the first request; after that the performance is the same as a prebuilt solution. This will allow the flexibility of dynamically updated sites and the power of full-blown enterprise applications over the web. You can also use prebuilt features available in ASP.NET to enjoy both worlds in production.
    The VWG XAML implementation (VWG Sites) will be the first truly compilable XAML implementation, as Microsoft implemented Silverlight and WPF as runtime markup interpretation, as opposed to the ASP.NET markup implementation which is compiled to CLR code once. We have chosen to implement the VWG Sites parser as a different way to create CLR code that provides greater performance over the reflection alternative. VWG Sites will also be the first server-side XAML UI engine which, while giving the power of XAML, will not require any plug-ins or installations on the client side. Short demo video of VWG Sites markup. There is also a live sample available here.

    Read the article

  • Oracle CRM For Public Sector, Commercial Business, Education

    - by michael.seback
    Chongqing Transport Commission Improves Management of Transport Projects
    The Chongqing Transport Commission is responsible for public passenger, road, and waterway transport in urban and rural areas of Chongqing. The commission administers the region's road and water industry; oversees the construction of transport infrastructure; and manages civil aviation, railroads, roads, waterways, ports, and wharves. "After studying the IT initiatives of other provincial transport commissions, we decided to use Siebel Public Sector to build our integrated transport service system. The Siebel software offers powerful functions that allow us to integrate information and improve the management of our road, rail, and waterway infrastructure projects." - Chen Xiaoming, Vice Director, Information Center, Chongqing Transport Commission. Read more here.
    Siemens Information Services Increases Productivity by 20%
    Siemens Information Services Pvt. Ltd. provides back-office account processing services to Siemens' vendors. The company works with Siemens' healthcare, energy, and industry divisions in Europe, the United States, and parts of the Asia-Pacific region. It approves financial transactions such as payroll, accounts data, purchase orders, invoices, and payments, and also creates service catalogs for customers and internal teams. "Oracle CRM On Demand provides us with a complete view of each customer's data from the moment they log a request to the time we close it. This has eliminated manual requests, and improved the service we offer to our clients across the Asia-Pacific region." - Sunil Zutshi, General Manager, IT, Siemens Information Services Pvt. Ltd. Read more here.
    China Distance Education Holdings Improves Call Center Productivity by 24%
    China Distance Education Holdings Limited is a leading provider of online education. The organization offers 174 courses through 16 web sites, including accounting, healthcare, law, and engineering. In 2010, 215,000 students were enrolled. "Online education is a fast growing sector in China. To maintain our competitiveness, we implemented Oracle Contact Center Anywhere to make it easier and faster for our call center staff to respond to student enquiries. As a result, their productivity increased by 24%." - Qin Songjiang, Chief Technology Officer, China Distance Education Holdings Limited. Read more here.

    Read the article

  • Highlights From Interact '12 - Healthcare Industry User Group

    - by John Webb
    Last week the Oracle team traveled to Orlando for the 18th annual Healthcare Industry User Group (HIUG) conference, Interact '12. HIUG has over 3,000 members representing 180 organizations. While we now know the result of the SCOTUS ruling yesterday, the consensus at the conference last week was summed up well in the welcome note from HIUG President, Chris Ryzewski: "Regardless of the legal ruling on this administration's Patient Protection and Affordable Care Act we will undoubtedly be called upon to further reduce costs and be more efficient in every aspect of our business processes." Well put! Attendance exceeded previous years with several hundred attendees, over 100 sessions, and a trade show that numbered 40 booths.
    Most of the HIUG members use PeopleSoft applications, and they tend to be full-suite customers who use PeopleSoft broadly from HCM to Financials and Supply Chain. For many customers who have licensed PeopleSoft in the last year, it was their first experience with a very strong and collaborative user group. I had dinner with a provider who is rolling out PeopleSoft HCM and ERP to a nationwide system of forty hospitals. A key driver for this organization and others is how to leverage PeopleSoft applications to meet the cost reduction goals mentioned above.
    In the area of procurement, the topic of Supplier Contract Management attracted a lot of attention. Contract pricing and adherence to contracts throughout the procure-to-pay life cycle are key to meeting cost containment objectives. Customers were excited to see the new faceted search capabilities and usability of the upcoming PeopleSoft eProcurement release. The new Work Center concept was discussed in several areas, including the Cost Reconciliation Work Center and the Supply Demand Work Center, which enables healthcare-specific functions around PAR counts and related replenishment activities. The latest Feature Pack of HCM 9.1 was demonstrated with the Talent Summary and Manager Dashboard. Customers were excited to see the major advances in self service available today.
    The Grants Special Interest Group focused quite a bit on the usage of PeopleSoft's Project Costing "Funds Distribution" feature, which can be used to manage capital projects funded by multiple agencies and sources. Along with the latest release of the Mobile Inventory solution that several hospitals have now implemented, a preview of new mobile applications for expenses and approvals drew a lot of attention. The PeopleSoft focus on assisting these companies in their goals to contain costs and create new efficiencies continues forward. We look forward to Interact '13!

    Read the article

  • Wired and Wireless Network Issues with PPPoE

    - by user9054
    Hi Friends, I have got this issue with Ubuntu 10.10. I have been with Ubuntu 8.04 and then decided to try out Ubuntu 10.10. I booted with a LiveCD and was able to configure the wireless network painlessly using the LiveCD. So happily I installed Ubuntu 10.10. As soon as Ubuntu came up it detected the wireless network and I was able to assign a static IP to eth1 (I don't use the DHCP option on my ADSL router), enter a WPA key and use pppoeconf to configure the dialer. The net was on and I was able to surf the net. All hunky dory so far.
    However, on the next boot the fun started. It did not detect the wireless network. I could not see the network manager icon in the systray. I used ifconfig and saw that the entry for eth1 was missing. I used ifup eth1 and it said that eth1 was already up. Then I installed wifi-radar. Wifi-Radar detected the wireless network. I configured wifi-radar for the detected wireless network, set the WPA driver as wext and used the manual IP settings. However, on clicking connect, wifi-radar started looking for a DHCP IP; needless to say it failed. For the love of god I cannot understand why wifi-radar is using DHCP when I have specified manual settings.
    Next I decided to use the wired network to surf the net looking for a solution. So I plugged in the network cable from my modem, it detected the plugged-in connection, I configured eth0, used pppoeconf and connected to the net. Then I foolishly decided to reboot my PC. And wonders of wonders, the same problem appeared. I cannot see eth0 in my ifconfig anymore. I used pon to start the dsl-provider connection and it said something about a network error. Now my ifconfig shows only lo; both eth0 and eth1 have disappeared.
    Can anybody help me on this? Is it a problem with IPv6, and if so, how do you disable IPv6 on Ubuntu 10.10? Or is this a known issue with Ubuntu 10.10?
    PS:
    1) I tried Linux Mint 10 and had the same issue. On rebooting, the wireless network was not getting detected.
    2) I have made myself the administrator so that there is no issue of rights or anything.
    Any help is appreciated.

    Read the article

  • WIF, ADFS 2 and WCF&ndash;Part 1: Overview

    - by Your DisplayName here!
    A lot has been written already about passive federation and integration of WIF and ADFS 2 into web apps. The whole active/WS-Trust feature area is much less documented or covered in articles and blogs. Over the next few posts I will try to compile all relevant information about the above topics – but let’s start with an overview.
    ADFS 2 has a number of endpoints under the /services/trust base address that implement the WS-Trust protocol. They are grouped by the WS-Trust version they support (/13 and /2005), the client credential type (/windows*, /username*, /certificate*) and the security mode (*transport, *mixed and message). You can see the endpoints in the MMC console under the Service/Endpoints page. So in other words, you use one of these endpoints (which one exactly depends on your configuration / system setup) to request tokens from ADFS 2.
    The bindings behind the endpoints are more or less standard WCF bindings, but with SecureConversation (establishSecurityContext) disabled. That means that whenever you need to programmatically talk to these endpoints you can (easily) create client bindings that are compatible. Another option is to use the special bindings that come with WIF (in the Microsoft.IdentityModel.Protocols.WSTrust.Bindings namespace). They are already pre-configured to be compatible with the ADFS endpoints. The downside of these bindings is that you can’t use them in configuration. That’s definitely a feature request of mine for the next version of WIF.
    The next important piece of information is the so-called Federation Service Identifier. This is the value that you (at least by default) have to use as a realm/appliesTo whenever you are requesting a token for ADFS (e.g. in an IdP –> R-STS scenario). Or (even more) technically speaking, ADFS 2 checks for this value in the audience URI restriction in SAML tokens. You can get to this value by clicking “Edit Federation Service Properties” in the MMC when the Service tree node is selected.
    OK – I will come back to this basic information in the following posts. Basically I want to go through the following scenarios:
    • ADFS in the IdP role
    • ADFS in the R-STS role (with a chained claims provider)
    • Using the WCF bindings for automatic token issuance
    • Using WSTrustChannelFactory for manual token handling
    Stay tuned…
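    As a small preview of the WSTrustChannelFactory scenario, here is a minimal sketch of requesting a token from the /13/usernamemixed endpoint using the WIF bindings mentioned above. The host name, credentials and relying party realm are placeholders, and error handling and certificate validation are omitted; this is a sketch under those assumptions, not production code.
        using System.ServiceModel;
        using System.ServiceModel.Security;
        using Microsoft.IdentityModel.Protocols.WSTrust;
        using Microsoft.IdentityModel.Protocols.WSTrust.Bindings;

        // Request a token from ADFS 2 over WS-Trust 1.3, username/mixed endpoint.
        // "adfs.example.com" and "https://relyingparty.example.com/" are placeholders.
        var factory = new WSTrustChannelFactory(
            new UserNameWSTrustBinding(SecurityMode.TransportWithMessageCredential),
            new EndpointAddress("https://adfs.example.com/adfs/services/trust/13/usernamemixed"));

        factory.TrustVersion = TrustVersion.WSTrust13;
        factory.Credentials.UserName.UserName = @"domain\user";
        factory.Credentials.UserName.Password = "password";

        var rst = new RequestSecurityToken
        {
            RequestType = RequestTypes.Issue,
            AppliesTo   = new EndpointAddress("https://relyingparty.example.com/"),
            KeyType     = KeyTypes.Symmetric
        };

        var channel = factory.CreateChannel();
        var token = channel.Issue(rst);   // returns a SecurityToken for the relying party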

    Read the article

  • Using the @ in SQL Azure Connections

    - by BuckWoody
    The other day I was working with a client on an application they were changing to a hybrid architecture – some data on-premise and other data in SQL Azure and Windows Azure Blob storage. I had them make a couple of corrections - the first was that all communications to SQL Azure need to be encrypted. It’s a simple addition to the connection string, depending on the library you use. Which brought up another interesting point. They had been using something that looked like this, using the .NET provider:
    Server=tcp:[serverName].database.windows.net;Database=myDataBase; User ID=LoginName;Password=myPassword; Trusted_Connection=False;Encrypt=True;
    This includes most of the formatting needed for SQL Azure. It specifies TCP as the transport mechanism, the database name is included, Trusted_Connection is off, and encryption is on. But it needed one more change:
    Server=tcp:[serverName].database.windows.net;Database=myDataBase; User ID=[LoginName]@[serverName];Password=myPassword; Trusted_Connection=False;Encrypt=True;
    Notice the difference? It’s the User ID parameter. It includes the @ symbol and the name of the server – not the whole DNS name, just the server name itself. The developers were a bit surprised, since it had been working with the first format that just used the user name. Why did both work, and why is one better than the other? It has to do with the connection library you use. For most libraries, the user name is enough. But for some libraries (subject to change so I don’t list them here) the server name parameter isn’t sent in the way the load balancer understands, so you need to include the server name right in the login, so the system can parse it correctly. Keep in mind, the string limit for that is 128 characters – so take the @ symbol and the server name into consideration for user names. The user connection info is detailed here: http://msdn.microsoft.com/en-us/library/ee336268.aspx
    Upshot? Include the @servername on your connection string just to be safe. And plan for that extra space…
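    For completeness, here is a small sketch of building the same connection string with SqlConnectionStringBuilder from System.Data.SqlClient; the server and login names are placeholders for your own values:
        using System.Data.SqlClient;

        // Build a SQL Azure connection string using the user@server login form.
        // "myserver" and "LoginName" are placeholders.
        var builder = new SqlConnectionStringBuilder
        {
            DataSource     = "tcp:myserver.database.windows.net",
            InitialCatalog = "myDataBase",
            UserID         = "LoginName@myserver",   // note the @servername suffix
            Password       = "myPassword",
            Encrypt        = true,                    // required for SQL Azure
            TrustServerCertificate = false
        };

        using (var conn = new SqlConnection(builder.ConnectionString))
        {
            conn.Open();
            // ... run commands ...
        }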

    Read the article

  • CPU Usage in Very Large Coherence Clusters

    - by jpurdy
    When sizing Coherence installations, one of the complicating factors is that these installations (by their very nature) tend to be application-specific, with some being large, memory-intensive caches, with others acting as I/O-intensive transaction-processing platforms, and still others performing CPU-intensive calculations across the data grid. Regardless of the primary resource requirements, Coherence sizing calculations are inherently empirical, in that there are so many permutations that a simple spreadsheet approach to sizing is rarely optimal (though it can provide a good starting estimate). So we typically recommend measuring actual resource usage (primarily CPU cycles, network bandwidth and memory) at a given load, and then extrapolating from those measurements. Of course there may be multiple types of load, and these may have varying degrees of correlation -- for example, an increased request rate may drive up the number of objects "pinned" in memory at any point, but the increase may be less than linear if those objects are naturally shared by concurrent requests. But for most reasonably-designed applications, a linear resource model will be reasonably accurate for most levels of scale. However, at extreme scale, sizing becomes a bit more complicated as certain cluster management operations -- while very infrequent -- become increasingly critical. This is because certain operations do not naturally tend to scale out. In a small cluster, sizing is primarily driven by the request rate, required cache size, or other application-driven metrics. In larger clusters (e.g. those with hundreds of cluster members), certain infrastructure tasks become intensive, in particular those related to members joining and leaving the cluster, such as introducing new cluster members to the rest of the cluster, or publishing the location of partitions during rebalancing. These tasks have a strong tendency to require all updates to be routed via a single member for the sake of cluster stability and data integrity. Fortunately that member is dynamically assigned in Coherence, so it is not a single point of failure, but it may still become a single point of bottleneck (until the cluster finishes its reconfiguration, at which point this member will have a similar load to the rest of the members). The most common cause of scaling issues in large clusters is disabling multicast (by configuring well-known addresses, aka WKA). This obviously impacts network usage, but it also has a large impact on CPU usage, primarily since the senior member must directly communicate certain messages with every other cluster member, and this communication requires significant CPU time. In particular, the need to notify the rest of the cluster about membership changes and corresponding partition reassignments adds stress to the senior member. Given that portions of the network stack may tend to be single-threaded (both in Coherence and the underlying OS), this may be even more problematic on servers with poor single-threaded performance. As a result of this, some extremely large clusters may be configured with a smaller number of partitions than ideal. This results in the size of each partition being increased. When a cache server fails, the other servers will use their fractional backups to recover the state of that server (and take over responsibility for their backed-up portion of that state). 
The finest granularity of this recovery is a single partition, and the single service thread cannot accept new requests during this recovery. Ordinarily, recovery is practically instantaneous (it is roughly equivalent to the time required to iterate over a set of backup backing map entries and move them to the primary backing map in the same JVM). But certain factors can increase this duration drastically (to several seconds): large partitions, sufficiently slow single-threaded CPU performance, many or expensive indexes to rebuild, etc. The solution, of course, is to mitigate each of those factors, but in many cases this may be challenging. Larger clusters also lead to the temptation to place more load on the available hardware resources, spreading CPU resources thin. As an example, while we've long been aware of how garbage collection can cause significant pauses, it usually isn't viewed as a major consumer of CPU (in terms of overall system throughput). Typically, the use of a concurrent collector allows greater responsiveness by minimizing pause times, at the cost of reducing system throughput. However, at a recent engagement, we were forced to turn off the concurrent collector and use a traditional parallel "stop the world" collector to reduce CPU usage to an acceptable level. In summary, there are some less obvious factors that may result in excessive CPU consumption in a larger cluster, so it is even more critical to test at full scale, even though allocating sufficient hardware may often be much more difficult for these large clusters.
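    Since disabling multicast via well-known addresses comes up above, a minimal sketch of a WKA override may help. This assumes the classic tangosol-coherence-override.xml format and uses placeholder addresses, so check the documentation for your Coherence version rather than treating the element names here as definitive:

        <?xml version="1.0"?>
        <coherence>
          <cluster-config>
            <unicast-listener>
              <!-- Placeholder addresses: list a small, stable subset of cluster members. -->
              <well-known-addresses>
                <socket-address id="1">
                  <address>10.0.0.1</address>
                  <port>8088</port>
                </socket-address>
                <socket-address id="2">
                  <address>10.0.0.2</address>
                  <port>8088</port>
                </socket-address>
              </well-known-addresses>
            </unicast-listener>
          </cluster-config>
        </coherence>

    With WKA enabled, members discover the cluster through the listed addresses instead of multicast, and cluster-wide notifications become point-to-point messages from the senior member – which is exactly the CPU cost described above.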

    Read the article

  • How to troubleshoot ethernet port on laptop

    - by Psallas Vassilios
    I have a problem with my wired connection. To be more specific, my laptop doesn't seem to recognize that I have plugged in an ethernet cable. I tried to download new drivers for my ethernet card, but I couldn't find any solutions. Maybe that's because I am new to Linux and not familiar with running commands in the terminal. OK, I have typed the command and here are the results:
    00:04.0 Ethernet controller [0200]: Silicon Integrated Systems [SiS] 191 Gigabit Ethernet Adapter [1039:0191] (rev 02)
    For the second reply I don't know if the following is what you asked me: Memory: 3.9 GiB, Processor: Intel Core 2 Duo CPU P8800 @ 2.66GHz × 2, OS type: 32-bit. My Ethernet connection had some problems on Windows too. I recently changed my internet provider, and since then my ethernet cable is not recognized by the laptop. At that time I was still on Windows. I thought that with Ubuntu the problem would be solved, but unfortunately it still persists. If someone can help me solve my problem I'll be thankful. Here are the results of the first three commands you told me to run:
    lsmod | grep sis190
    sis190 22570 0
    sudo modprobe sis190
    ifconfig
    eth0 Link encap:Ethernet HWaddr 00:90:f5:90:81:7e UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
    lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:185 errors:0 dropped:0 overruns:0 frame:0 TX packets:185 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:22672 (22.6 KB) TX bytes:22672 (22.6 KB)
    wlan0 Link encap:Ethernet HWaddr 00:25:d3:2c:3a:ae inet addr:192.168.1.72 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::225:d3ff:fe2c:3aae/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:260 errors:0 dropped:0 overruns:0 frame:0 TX packets:363 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:71992 (71.9 KB) TX bytes:52000 (52.0 KB)
    and the results of running the last two commands:
    dmesg | grep -e eth -e sis190
    [ 0.816667] sis190: sis190 Gigabit Ethernet driver 1.4 loaded
    [ 0.816728] sis190 0000:00:04.0: setting latency timer to 64
    [ 0.816751] sis190: 0000:00:04.0: Read MAC address from EEPROM
    [ 0.904032] sis190: 0000:00:04.0: Realtek PHY RTL8201 transceiver at address
    [ 1.416030] sis190: 0000:00:04.0: Using transceiver at address 1 as default
    [ 1.448235] sis190 0000:00:04.0: eth0: 0000:00:04.0: SiS 191 PCI Gigabit Ethernet adapter at f8410000 (IRQ: 19), 00:90:f5:90:81:7e
    [ 1.448238] sis190 0000:00:04.0: eth0: GMII mode.
    [ 1.448243] sis190 0000:00:04.0: eth0: Enabling Auto-negotiation
    [ 11.560907] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
    [ 16.372019] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
    [ 16.372265] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
    [ 26.424038] sis190 0000:00:04.0: eth0: auto-negotiating...
    nm-tool
    NetworkManager Tool
    State: connected (global)
    - Device: eth0 -----------------------------------------------------------------
    Type: Wired
    Driver: sis190
    State: unavailable
    Default: no
    HW Address: 00:90:F5:90:81:7E
    Capabilities: Carrier Detect: yes
    Wired Properties
    Carrier: off

    Read the article

  • Are there any concerns with using a static read-only unit of work so that it behaves like a cache?

    - by Rowan Freeman
    Related question: How do I cache data that rarely changes? I'm making an ASP.NET MVC4 application. On every request, the security details about the user need to be checked against the area/controller/action they are accessing to see if they are allowed to view it. The security information is stored in the database. For example: User, Permission, UserPermission, Action, ActionPermission. A "Permission" is a token that is applied to an MVC action to indicate that the token is required in order to access the action. Once a user is given the permission (via the UserPermission table) then they have the token and can therefore access the action. I've been looking into how to cache this data (since it rarely changes) so that I'm only querying in-memory data and not hitting the database (which is a considerable performance hit at the moment). I've tried storing things in lists and using a caching provider, but I either run into problems or performance doesn't improve. One problem that I constantly run into is that I'm using lazy loading and dynamic proxies with EntityFramework. This means that even if I ToList() everything and store it somewhere static, the relationships are never populated. For example, User.Permissions is an ICollection but it's always null. I don't want to Include() everything because I'm trying to keep things simple and generic (and easy to modify). One thing I know is that an EntityFramework DbContext is a unit of work that acts with 1st-level caching. That is, for the duration of the unit of work, everything that is accessed is cached in memory. I want to create a read-only DbContext that will exist indefinitely and will only be used to read permission data. Upon testing, this worked perfectly; my page load times went from 200ms+ to 20ms. I can easily force the data to refresh at certain intervals or simply leave it to refresh when the application pool is recycled. Basically it will behave like a cache. Note that the rest of the application will interact with other contexts that exist per request as normal. Is there any disadvantage to this approach? Could I be doing something different?
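    As a rough, hedged illustration of the pattern the poster describes (not their actual code – the context and entity names below are invented), a long-lived read-only context can sit behind a static accessor, with the permission data pulled into its first-level cache once so that later checks never touch the database:

        using System;
        using System.Linq;
        using System.Data.Entity;

        // Hypothetical model standing in for the poster's entities.
        public class Permission { public int Id { get; set; } public string Name { get; set; } }
        public class UserPermission
        {
            public int Id { get; set; }
            public int UserId { get; set; }
            public virtual Permission Permission { get; set; }
        }
        public class SecurityDbContext : DbContext
        {
            public DbSet<UserPermission> UserPermissions { get; set; }
            public DbSet<Permission> Permissions { get; set; }
        }

        public static class PermissionCache
        {
            private static readonly object Sync = new object();
            private static SecurityDbContext _context = CreateReadOnlyContext();

            private static SecurityDbContext CreateReadOnlyContext()
            {
                var context = new SecurityDbContext();
                // Pull the permission data into the context's first-level cache once;
                // later checks read the tracked entities without further queries.
                context.UserPermissions.Include(up => up.Permission).Load();
                return context;
            }

            public static bool UserHasPermission(int userId, string permission)
            {
                lock (Sync)
                {
                    return _context.UserPermissions.Local.Any(up =>
                        up.UserId == userId && up.Permission.Name == permission);
                }
            }

            // Call on a timer, or whenever permission data is known to have changed.
            public static void Refresh()
            {
                lock (Sync)
                {
                    _context.Dispose();
                    _context = CreateReadOnlyContext();
                }
            }
        }

    The trade-offs are the ones the lock hints at: DbContext is not thread-safe, so concurrent reads must be serialized (or the lookup moved into a plain immutable collection), and the data is stale until Refresh() runs, so writes elsewhere in the application are not visible to the cache immediately.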

    Read the article

  • What to answer to a customer who asks which one of two equivalent technologies must be used?

    - by MainMa
    As a freelancer, I am often asked by my customers to choose between similar options, neither of which is better than the other. Examples: “Does my e-commerce website need to be in PHP or ASP.NET?” “Should I host this ordinary web service in the cloud or use an ordinary hosting service?” “Which one is better for my new website: MySQL or Oracle?” etc. In maybe 1% of cases at most, the choice is relevant and there is a real, objective reason to use one over the other, based on precise metrics and studies. In all other cases, it doesn't matter at all. It is totally, completely irrelevant, either because there are no implications¹, or because those implications are too small to be taken into account², or, finally, because it's impossible to predict those implications³. If you know one technology and not the other, the answer to those questions is easy: “You can write the application in either C# or Java; both are probably equivalent in your case. Note that I'm a C# developer, so if you choose Java, I would not be able to work on your project and you would need to find another freelancer.” When you know both technologies, you can't answer that. In this case, how do you explain to the customer that the question he asks is flamewar material and has no real consequences for his project? In other words, how do you explain that you've chosen one technology rather than an equivalent one for reasons related to human resources, without giving the impression of being unprofessional or of not caring about the project? ¹ Example: Is MySQL better (or worse), performance-wise, than Oracle for a personal website which will be accessed by, oh, let's be optimistic, two people per day? ² Example: for a given project, I was asked to assess whether Windows Azure hosting would be cheaper than hosting the same application with a well-known ASP.NET hosting provider. The cost turned out to be exactly the same. ³ Example: your customer has an idea for a future application (the idea itself being extremely vague). There is no business plan, no requirements, nothing at all. Just an idea. You are asked if Java is better than C# for this app. What do you answer?

    Read the article

  • As the current draft stands, what is the most significant change the "National Strategy for Trusted Identities in Cyberspace" will provoke?

    - by mfg
    A current draft of the "National Strategy for Trusted Identities in Cyberspace" has been posted by the Department of Homeland Security. This question is not asking about privacy or constitutionality, but about how this act will impact developers' business models and development strategies. When the post was made I was reminded of Jeff's November blog post regarding an internet driver's license. Whether that is a perfect model or not, both approaches are attempting to handle a shared problem (of both developers and end users): How do we establish an online identity? The question I ask here is, with respect to the various burdens that would be imposed on developers and users, what are some of the major, foreseeable implementation issues that will arise from the current U.S. Government's proposed solution? For a quick primer on the setup, jump to page 12 for the infrastructure components; here are two stand-outs:
    An Identity Provider (IDP) is responsible for the processes associated with enrolling a subject, and establishing and maintaining the digital identity associated with an individual or NPE. These processes include identity vetting and proofing, as well as revocation, suspension, and recovery of the digital identity. The IDP is responsible for issuing a credential, the information object or device used during a transaction to provide evidence of the subject's identity; it may also provide linkage to authority, roles, rights, privileges, and other attributes.
    The credential can be stored on an identity medium, which is a device or object (physical or virtual) used for storing one or more credentials, claims, or attributes related to a subject. Identity media are widely available in many formats, such as smart cards, security chips embedded in PCs, cell phones, software-based certificates, and USB devices. Selection of the appropriate credential is implementation specific and dependent on the risk tolerance of the participating entities.
    Here are the first actionable components considered in the draft:
    Action 1: Designate a Federal Agency to Lead the Public/Private Sector Efforts Associated with Achieving the Goals of the Strategy
    Action 2: Develop a Shared, Comprehensive Public/Private Sector Implementation Plan
    Action 3: Accelerate the Expansion of Federal Services, Pilots, and Policies that Align with the Identity Ecosystem
    Action 4: Work Among the Public/Private Sectors to Implement Enhanced Privacy Protections
    Action 5: Coordinate the Development and Refinement of Risk Models and Interoperability Standards
    Action 6: Address the Liability Concerns of Service Providers and Individuals
    Action 7: Perform Outreach and Awareness Across all Stakeholders
    Action 8: Continue Collaborating in International Efforts
    Action 9: Identify Other Means to Drive Adoption of the Identity Ecosystem across the Nation

    Read the article

  • Use Expressions with LINQ to Entities

    - by EltonStoneman
    [Source: http://geekswithblogs.net/EltonStoneman] Recently I've been putting together a generic approach for paging the response from a WCF service. Paging changes the service signature, so it's not as simple as adding a behavior to an existing service in config, but the complexity of the paging is isolated in a generic base class. We're using the Entity Framework talking to SQL Server, so when we ask for a page using LINQ's .Take() method we get a nice efficient SQL query for just the rows we want, with minimal impact on SQL Server and network traffic. We use the maximum ID of the record returned as a high-water mark (rather than using .Skip() to go to the next record), so the approach caters for records being deleted between page requests. In the paged response we include a HasMorePages indicator, computed by comparing the max ID in the page of results to the max ID for the whole resultset - if the latter is bigger, then there are more pages. In some quick performance testing, the paged version of the service performed much more slowly than the unpaged version, which was unexpected. We narrowed it down to the code which gets the max ID for the full resultset - instead of building an efficient MAX() SQL query, EF was returning the whole resultset and then computing the max ID in the service layer. It's easy to reproduce - take this AdventureWorks query:

        var context = new AdventureWorksEntities();
        var query = from od in context.SalesOrderDetail
                    where od.ModifiedDate >= modified
                       && od.SalesOrderDetailID.CompareTo(id) > 0
                    orderby od.SalesOrderDetailID
                    select od;

    We can find the maximum SalesOrderDetailID like this:

        var maxIdEfficiently = query.Max(od => od.SalesOrderDetailID);

    which produces our efficient MAX() SQL query. If we're doing this generically and we already have the ID function in a Func:

        Func<SalesOrderDetail, int> idFunc = od => od.SalesOrderDetailID;
        var maxIdInefficiently = query.Max(idFunc);

    This fetches all the results from the query and then runs the Max() function in code. If you look at the difference in Reflector, the first call passes an Expression to the Max(), while the second call passes a Func. So it's an easy fix - wrap the Func in an Expression:

        Expression<Func<SalesOrderDetail, int>> idExpression = od => od.SalesOrderDetailID;
        var maxIdEfficientlyAgain = query.Max(idExpression);

    - and we're back to running an efficient MAX() statement. Evidently the EF provider can dissect an Expression and build its equivalent in SQL, but it can't do that with Funcs.
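    A small, hedged sketch of how that takeaway might look in the generic base class mentioned above (the class and member names here are assumptions, not the author's code): expose the ID selector as an Expression rather than a Func, so the EF provider can translate Max() into SQL.

        using System;
        using System.Linq;
        using System.Linq.Expressions;

        // Hypothetical base class illustrating the Expression-based selector.
        public abstract class PagedServiceBase<T>
        {
            // An Expression (not a Func) lets Queryable.Max() run as a SQL MAX().
            protected abstract Expression<Func<T, int>> IdSelector { get; }

            protected int GetMaxId(IQueryable<T> query)
            {
                return query.Max(IdSelector);
            }
        }

    Calling query.Max(IdSelector) binds to Queryable.Max, which accepts an Expression and keeps the work in the database; a Func would bind to Enumerable.Max and pull the rows back first.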

    Read the article

  • Download files from a SharePoint site using the RSSBus SSIS Components

    - by dataintegration
    In this article we will show how to use a stored procedure included in the RSSBus SSIS Components for SharePoint to download files from SharePoint. While the article uses the RSSBus SSIS Components for SharePoint, the same process will work for any of our SSIS Components.
    Step 1: Open Visual Studio and create a new Integration Services Project.
    Step 2: Add a new Data Flow Task to the Control Flow screen and open the Data Flow Task.
    Step 3: Add an RSSBus SharePoint Source to the Data Flow Task.
    Step 4: In the RSSBus SharePoint Source, add a new Connection Manager, and add your credentials for the SharePoint site.
    Step 5: Now from the Table or View dropdown, choose the name of the Document Library that you are going to back up and close the wizard.
    Step 6: Add a Script Component to the Data Flow Task and drag an output arrow from the 'RSSBus SharePoint Source' to it.
    Step 7: Open the Script Component, go to edit the Input Columns, and choose all the columns.
    Step 8: This will open a new Visual Studio instance, with a project in it. In this project add a reference to the RSSBus.SSIS2008.SharePoint assembly available in the RSSBus SSIS Components for SharePoint installation directory.
    Step 9: In the 'ScriptMain' class, add the System.Data.RSSBus.SharePoint namespace and go to the 'Input0_ProcessInputRow' method (this method's name may vary depending on the input name in the Script Component).
    Step 10: In the 'Input0_ProcessInputRow' method, you can add code to use the DownloadDocument stored procedure. Below we show the sample code:

        String connString = "Offline=False;Password=PASSWORD;User=USER;URL=SHAREPOINT-SITE";
        String downloadDir = "C:\\Documents\\";
        SharePointConnection conn = new SharePointConnection(connString);
        SharePointCommand comm = new SharePointCommand("DownloadDocument", conn);
        comm.CommandType = CommandType.StoredProcedure;
        comm.Parameters.Clear();
        String file = downloadDir + Row.LinkFilenameNoMenu.ToString();
        comm.Parameters.Add(new SharePointParameter("@File", file));
        String list = Row.ServerUrl.ToString().Split('/')[1].ToString();
        comm.Parameters.Add(new SharePointParameter("@Library", list));
        String remoteFile = Row.LinkFilenameNoMenu.ToString();
        comm.Parameters.Add(new SharePointParameter("@RemoteFile", remoteFile));
        comm.ExecuteNonQuery();

    After saving your changes to the Script Component, you can execute the project and find the downloaded files in the download directory.
    SSIS Sample Project: To help you with getting started using the SharePoint Data Provider within SQL Server SSIS, download the fully functional sample package. You will also need the SharePoint SSIS Connector to make the connection. You can download a free trial here.
    Note: Before running the demo, you will need to change your connection details in both the 'Script Component' code and the 'Connection Manager'.

    Read the article
