Search Results

Search found 16078 results on 644 pages for 'oracles social services'.

  • apt-get upgrade stuck at the same package (openjdk-6-jre-headless)

    - by decibyte
    I'm stuck and can't upgrade my system. Running sudo apt-get upgrade gives me the following:

        mmm@alalunga:~$ sudo apt-get upgrade
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages have been kept back:
          ginn libgrip0 linux-generic-pae linux-headers-generic-pae linux-image-generic-pae
        The following packages will be upgraded:
          apport apport-gtk bind9-host build-essential dhcp3-client dhcp3-common dnsutils eog evince
          evince-common firefox firefox-branding firefox-dbg firefox-globalmenu firefox-gnome-support
          firefox-locale-en gimp gimp-data gir1.2-totem-1.0 glib-networking glib-networking-common
          glib-networking-services gnupg gpgv icedtea-6-jre-cacao icedtea-6-jre-jamvm icedtea-6-plugin
          icedtea-netx icedtea-netx-common icedtea-plugin isc-dhcp-client isc-dhcp-common
          libapache2-mod-php5 libart-2.0-2 libbind9-80 libdns81 libevince3-3 libgimp2.0 libisc83
          libisccc80 libisccfg82 liblwres80 libssl-dev libssl-doc libssl1.0.0 libtotem0 linux-firmware
          linux-libc-dev openjdk-6-jre openjdk-6-jre-headless openjdk-6-jre-lib openssl php-pear
          php5-cli php5-common php5-curl php5-dev php5-gd php5-mysql php5-xsl policykit-1-gnome
          python-apport python-django python-gst0.10 python-problem-report resolvconf thunderbird
          thunderbird-globalmenu thunderbird-gnome-support totem totem-common totem-mozilla
          totem-plugins xserver-xorg-input-synaptics
        74 upgraded, 0 newly installed, 0 to remove and 5 not upgraded.
        Need to get 317 MB/327 MB of archives.
        After this operation, 1.481 kB of additional disk space will be used.
        Do you want to continue [Y/n]?
        Get:1 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB]
        Get:2 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB]
        Get:3 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB]
        Get:4 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB]
        Get:5 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB]
        Get:6 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB]
        Get:7 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB]
        9% [7 openjdk-6-jre-headless 27,3 MB/27,3 MB 100%]

    It keeps downloading the openjdk-6-jre-headless package, does nothing for a while (hanging on the last line above), then downloads the package again. It's on its 13th download attempt as I write this. The downloads themselves seem to complete just fine, but whatever happens after each download appears to be failing. I tried removing openjdk-6, but then it wanted to install openjdk-7 instead, with the same result: hanging at openjdk-7-jre-headless instead. I also tried changing servers from my local (Danish) mirror to the main server. No luck. This is also keeping me from upgrading all the other packages. What should I do?
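
    A hedged first step, assuming (the output above doesn't confirm it) that a corrupted archive cache or a full /var partition is breaking the step after each download:

        sudo apt-get clean          # empty /var/cache/apt/archives
        sudo dpkg --configure -a    # finish any half-configured packages
        df -h /var                  # confirm there is space to unpack
        sudo apt-get upgrade

    If the hang persists with a clean cache and free disk space, installing just the one package (sudo apt-get install openjdk-6-jre-headless) can at least isolate the failing step.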

  • The Other Side of XBRL

    - by john.orourke(at)oracle.com
    With the United States SEC's mandate for XBRL filings entering its third year, and impacting over 7,000 additional companies in 2011, there's a lot of buzz in the industry about how companies should address the new reporting requirements. Should they outsource the XBRL tagging process to a third-party publisher, handle the process in-house with a bolt-on XBRL tool, or integrate XBRL tagging with the financial close and reporting process? Oracle is recommending the latter approach; in fact, here's a link to a recent webcast that I did with CFO.com on this topic: http://www.cfo.com/webcasts/index.cfm/l_eventarchive/14548560

    But production of XBRL-based filings is only half of the story. The other half is consumption of XBRL by regulators, academics, financial analysts and investors. As I mentioned in my December article on the XBRL US conference, the feedback from these groups is that they are not really leveraging XBRL for analysis of companies, due to a lack of tools and of historical XBRL-based data on public companies. The good news here is that the historical data problem is getting better as large accelerated filers enter their third year of XBRL filings. And the situation is getting better on the reporting and analysis tools side of the equation as well - and Oracle is leading the way.

    In early January, Oracle released the Oracle XBRL Extension for Oracle Database 11g. This is a "no cost option" on top of the latest Oracle Database 11.2.0.2.0 release. With this added functionality, organizations can create one or more back-end XBRL repositories based on Oracle Database, which provide XBRL storage and queryability with a set of XBRL-specific services. The XBRL Extension to Oracle XML DB integrates easily with Oracle Business Intelligence Suite Enterprise Edition (OBIEE) for analytics, and with interactive development environments (IDEs) and design tools for creating and editing XBRL taxonomies.

    The Oracle XBRL Extension to Oracle Database 11g should be attractive to regulators, stock exchanges, universities and other organizations that need to collect, analyze and disseminate XBRL-based filings. It should also be attractive to organizations that produce XBRL filings and need a way to store and compare their own XBRL-based financial filings to those of their peers and competitors.

    If you would like more information, here's a link to a page on the Oracle Technology Network with the details about the Oracle XBRL Extension for Oracle Database 11g, including a data sheet, white paper, presentation, demos and other information: http://www.oracle.com/technetwork/database/features/xmldb/index-087631.html

  • Particle System in XNA - cannot draw particle

    - by Dave Voyles
    I'm trying to implement a simple particle system in my XNA project. I'm going by RB Whitaker's tutorial, and it seems simple enough. I'm trying to draw particles within my menu screen. Below I've included the code which I think is applicable. I'm coming up with one error in my build, which states that I need to create a new instance of the EmitterLocation from the particleEngine. When I hover over particleEngine.EmitterLocation = new Vector2(Mouse.GetState().X, Mouse.GetState().Y); it states that particleEngine is returning a null value. What could be causing this?

        /// <summary>
        /// Base class for screens that contain a menu of options. The user can
        /// move up and down to select an entry, or cancel to back out of the screen.
        /// </summary>
        abstract class MenuScreen : GameScreen
        {
            ParticleEngine particleEngine;

            public void LoadContent(ContentManager content)
            {
                if (content == null)
                {
                    content = new ContentManager(ScreenManager.Game.Services, "Content");
                }

                base.LoadContent();

                List<Texture2D> textures = new List<Texture2D>();
                textures.Add(content.Load<Texture2D>(@"gfx/circle"));
                textures.Add(content.Load<Texture2D>(@"gfx/star"));
                textures.Add(content.Load<Texture2D>(@"gfx/diamond"));
                particleEngine = new ParticleEngine(textures, new Vector2(400, 240));
            }

            public override void Update(GameTime gameTime, bool otherScreenHasFocus,
                                        bool coveredByOtherScreen)
            {
                base.Update(gameTime, otherScreenHasFocus, coveredByOtherScreen);

                // Update each nested MenuEntry object.
                for (int i = 0; i < menuEntries.Count; i++)
                {
                    bool isSelected = IsActive && (i == selectedEntry);
                    menuEntries[i].Update(this, isSelected, gameTime);
                }

                particleEngine.EmitterLocation = new Vector2(Mouse.GetState().X, Mouse.GetState().Y);
                particleEngine.Update();
            }

            public override void Draw(GameTime gameTime)
            {
                // make sure our entries are in the right place before we draw them
                UpdateMenuEntryLocations();

                GraphicsDevice graphics = ScreenManager.GraphicsDevice;
                SpriteBatch spriteBatch = ScreenManager.SpriteBatch;
                SpriteFont font = ScreenManager.Font;

                spriteBatch.Begin();
                // Draw stuff logic
                spriteBatch.End();

                particleEngine.Draw(spriteBatch);
            }
        }
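
    One thing worth checking, hedged since the rest of the screen-manager code isn't shown: this LoadContent(ContentManager) overload doesn't override GameScreen's LoadContent, so if the screen manager only calls the base signature, this method never runs and particleEngine stays null, which would match the symptom described. For reference, a minimal engine consistent with the calls used above (the Particle class, emission rate and lifetimes are illustrative assumptions, not code from the tutorial) might look like:

        using System;
        using System.Collections.Generic;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        // Hypothetical minimal particle, modeled only on the usage in the question.
        public class Particle
        {
            public Texture2D Texture;
            public Vector2 Position, Velocity;
            public int TimeToLive;

            public Particle(Texture2D texture, Vector2 position, Vector2 velocity, int ttl)
            {
                Texture = texture; Position = position; Velocity = velocity; TimeToLive = ttl;
            }

            public void Update() { TimeToLive--; Position += Velocity; }

            public void Draw(SpriteBatch spriteBatch)
            {
                spriteBatch.Draw(Texture, Position, Color.White);
            }
        }

        public class ParticleEngine
        {
            private readonly Random random = new Random();
            private readonly List<Texture2D> textures;
            private readonly List<Particle> particles = new List<Particle>();

            public Vector2 EmitterLocation { get; set; }

            public ParticleEngine(List<Texture2D> textures, Vector2 location)
            {
                this.textures = textures;
                EmitterLocation = location;
            }

            public void Update()
            {
                // Emit a few new particles each frame from the emitter location.
                for (int i = 0; i < 5; i++)
                    particles.Add(GenerateNewParticle());

                // Age particles and retire the dead ones.
                for (int i = particles.Count - 1; i >= 0; i--)
                {
                    particles[i].Update();
                    if (particles[i].TimeToLive <= 0)
                        particles.RemoveAt(i);
                }
            }

            public void Draw(SpriteBatch spriteBatch)
            {
                // Self-contained Begin/End, since MenuScreen.Draw above calls
                // particleEngine.Draw after its own spriteBatch.End().
                spriteBatch.Begin();
                foreach (Particle particle in particles)
                    particle.Draw(spriteBatch);
                spriteBatch.End();
            }

            private Particle GenerateNewParticle()
            {
                Texture2D texture = textures[random.Next(textures.Count)];
                Vector2 velocity = new Vector2((float)(random.NextDouble() * 2 - 1),
                                               (float)(random.NextDouble() * 2 - 1));
                return new Particle(texture, EmitterLocation, velocity, 20 + random.Next(40));
            }
        }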

  • What does the Spring framework do? Should I use it? Why or why not?

    - by sangfroid
    So, I'm starting a brand-new project in Java, and am considering using Spring. Why am I considering Spring? Because lots of people tell me I should use Spring! Seriously, any time I've tried to get people to explain what exactly Spring is or what it does, they can never give me a straight answer. I've checked the intros on the SpringSource site, and they're either really complicated or really tutorial-focused, and none of them give me a good idea of why I should be using it, or how it will make my life easier. Sometimes people throw around the term "dependency injection", which just confuses me even more, because I think I have a different understanding of what that term means.

    Anyway, here's a little about my background and my app: I've been developing in Java for a while, doing back-end web development. Yes, I do a ton of unit testing. To facilitate this, I typically make (at least) two versions of a method: one that uses instance variables, and one that only uses variables that are passed in to the method. The one that uses instance variables calls the other one, supplying the instance variables. When it comes time to unit test, I use Mockito to mock up the objects and then make calls to the method that doesn't use instance variables. This is what I've always understood "dependency injection" to be.

    My app is pretty simple, from a CS perspective. Small project, 1-2 developers to start with. Mostly CRUD-type operations with a bunch of search thrown in. Basically a bunch of RESTful web services, plus a web front-end and then eventually some mobile clients. I'm thinking of doing the front-end in straight HTML/CSS/JS/jQuery, so no real plans to use JSP. I'm using Hibernate as an ORM, and Jersey to implement the web services.

    I've already started coding, and am really eager to get a demo out there that I can shop around and see if anyone wants to invest. So obviously time is of the essence. I understand Spring has quite the learning curve, plus it looks like it necessitates a whole bunch of XML configuration, which I typically try to avoid like the plague. But if it can make my life easier and (especially) if it can make development and testing faster, I'm willing to bite the bullet and learn Spring. So please. Educate me. Should I use Spring? Why or why not?

  • ArchBeat Link-o-Rama for 2012-06-05

    - by Bob Rhubart
    Why is enterprise software often so complicated? | Rajesh Raheja (rraheja.wordpress.com)
    Rajesh Raheja shares "a few examples of requirements that lead to creation of complex platform infrastructures that [make] up the complex enterprise software."

    Educause Top-Ten IT Issues - the most change in a decade or more | Cole Clark (blogs.oracle.com)
    Cole Clark discusses why "higher education IT must change in order to fully realize the potential for transforming the institution, and therefore its people must learn new skills, understand and accept new ways of solving problems, and not be tied down by past practices or institutional inertia."

    Oracle VM RAC template - what it took | Wim Coekaerts (blogs.oracle.com)
    Wim Coekaerts shares an example that shows how easy it is to deploy a complete Oracle RAC cluster with Oracle VM.

    Oracle Cloud and Oracle Platinum Services Announcements (oracle.com)
    Featuring Larry Ellison and Mark Hurd. Wednesday, June 06, 2012, 1:00 p.m. PT - 2:30 p.m. PT.

    Creating an Oracle Endeca Information Discovery 2.3 Application Part 1: Scoping and Design | Mark Rittman (www.rittmanmead.com)
    Oracle ACE Director Mark Rittman launches a new series that dives into "the various stages in building a simple Oracle Endeca Information Discovery application, using the recent Endeca Information Discovery 2.3 release."

    Introducing Decision Tables in the SOA Suite 11g | Lucas Jellema (technology.amis.nl)
    Oracle ACE Director Lucas Jellema demonstrates how "the decision table can be put to good use to implement the business logic behind the classical game of Rock, Paper and Scissors."

    Application integration: reorganise, recycle, repurpose | Andrew Clarke (radiofreetooting.blogspot.com)
    "Integration is a topic which is in everybody's bailiwick," says Oracle ACE Andrew Clarke. "The business people want to get the best value from their existing IT investments. The architects need to understand the interfaces between the silos and across the layers. The developers have to implement it."

    Using XA Transactions in Coherence-based Applications | Jonathan Purdy (blogs.oracle.com)
    Purdy shares "a few common approaches when integrating Coherence into applications via the use of an application server's transaction manager."

    Thought for the Day
    "The difficulty lies, not in the new ideas, but in escaping from the old ones..." - John Maynard Keynes (June 5, 1883 - April 4, 1946)
    Source: Quotations Page

  • Exchange 2010, Exchange 2003 Mail Flow issue

    - by Ryan Roussel
    While performing the initial Exchange 2010 deployment for a customer migrating from Exchange 2003, I ran into an issue with mail flow between the two environments. The Exchange 2003 mailboxes could send to Exchange 2010, as well as to and from the internet. Exchange 2010 mailboxes could send and receive to the internet; however, they could not send to Exchange 2003 mailboxes.

    After scouring the internet for a solution, it seemed quite a few people were experiencing this issue with no resolution to be found, or at least not easily. After many attempts at manually deleting and recreating the routing group connectors, I finally lucked onto the answer in an obscure comment left to another blogger: if inheritable permissions are not allowed on the Exchange 2003 object in the Active Directory schema, Exchange Server authentication cannot be achieved between the servers.

    It seems that when BlackBerry Enterprise Server gets added to Exchange 2003 environments, a lot of admins get tricky and add the BES Admin user explicitly to the server object to allow inheritance down from there to all mailboxes. The problem is they also coincidentally turn off inheritance to the server object itself from its parent containers. You can re-establish inheritance without overwriting the existing ACL, however, so that the BES Admin can remain in the server object ACL. By re-establishing inheritance to the 2003 server object, mail flow was instantly restored between the servers.

    To re-establish inheritance:
    1. Open ADSI Edit by adding the snap-in to an MMC (it should be included on your 2008 server where Exchange 2010 is installed).
    2. Navigate to Configuration > Services > Microsoft Exchange > Exchange Organization > Administrative Groups > First Administrative Group > Servers.
    3. In the right pane, right-click on the CN=Server Name of your Exchange 2003 server and select Properties.
    4. Navigate to the Security tab and hit Advanced toward the bottom.
    5. Check the checkbox that reads "include inheritable permissions" toward the bottom of the dialogue box.

  • Managing accounts on a private website for a real-life community

    - by Smudge
    Hey Pro Webmasters, I'm looking at setting up a walled-in website for a real-life community of people, and I was wondering if anyone has experience with managing member accounts for this kind of thing. Some conditions that must be met:

    - This community has a set list of real-life members, each of whom would be eligible for one account on the website. We don't expect or require that they all sign up. It is purely opt-in, but we anticipate that many of them would be interested in the services we are setting up.
    - Some of the community members' email addresses are known, but some of the members have fallen off the grid over the years, so ideally there would be a way for them to get back in touch with us through the public-facing side of the site. (And we'd want to manually verify the identity of anyone who does so.)
    - Their names are known, and for similar projects in the past we have assigned usernames derived from their real-life names. This time, however, we are open to other approaches, such as letting them specify their own username or getting rid of usernames entirely.

    The specific web technology we will use (e.g. Drupal, Joomla, etc.) is not really our concern right now - I am more interested in how this can be approached in the abstract. Our database already includes the full member roster, so we can email many of them generated links to a page where they can create an account, and internally we can require that these accounts be paired with a known member (a sketch of one such link scheme follows below). Should we have them specify their own usernames, or are we fine letting them use their registered email address to log in? Are there any paradigms for walled-in community portals that help address security issues if, for example, one of their email accounts is compromised? We don't anticipate attempted break-ins being much of a threat, because nothing about this community is high-profile, but we do want to address security concerns. In addition, we want to make the sign-up process as painless for the members as possible, especially given the fact that we can't just make sign-ups open to anyone. I'm interested to hear your thoughts and suggestions! Thanks!
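
    A minimal sketch of the "generated link" idea, assuming an HMAC-signed, single-use invite token; the class, the key handling and the URL format are illustrative assumptions, not details from the question:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        // Hypothetical helper: creates and verifies single-use invite tokens
        // that pair each sign-up with a known roster member.
        public static class InviteTokens
        {
            public static string CreateToken(string memberId, byte[] secretKey)
            {
                using (var hmac = new HMACSHA256(secretKey))
                {
                    byte[] sig = hmac.ComputeHash(Encoding.UTF8.GetBytes(memberId));
                    // Token embeds the member id and a signature over it.
                    return memberId + "." + Convert.ToBase64String(sig);
                }
            }

            public static bool TryValidate(string token, byte[] secretKey, out string memberId)
            {
                memberId = null;
                int dot = token.LastIndexOf('.');
                if (dot < 1) return false;

                string id = token.Substring(0, dot);
                string expected = CreateToken(id, secretKey);

                // Constant-time comparison avoids leaking signature prefixes.
                if (!FixedTimeEquals(Encoding.UTF8.GetBytes(expected), Encoding.UTF8.GetBytes(token)))
                    return false;

                memberId = id;
                return true; // the caller should also check the token hasn't been used yet
            }

            private static bool FixedTimeEquals(byte[] a, byte[] b)
            {
                if (a.Length != b.Length) return false;
                int diff = 0;
                for (int i = 0; i < a.Length; i++) diff |= a[i] ^ b[i];
                return diff == 0;
            }
        }

    Each roster entry would then be emailed something like https://example.org/register?invite=<token> (hypothetical URL); on arrival the site validates the signature, looks up the member, and marks the token used, so every new account stays paired with a known roster member without opening sign-ups to the public.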

  • Workflow 4.5 is Awesome, can't wait for 5.0!

    - by JoshReuben
    About 2 years ago I wrote a blog post describing what I would like to see in Workflow vNext: http://geekswithblogs.net/JoshReuben/archive/2010/08/25/workflow-4.0---not-there-yet.aspx

    At the time WF 4.0 was a little rough around the edges - the State Machine was on CodePlex and people were simulating state machines with Flowcharts. Last year I built a near-realtime machine management system using WF 4.0.1 - it's managing the internal operations of this device: http://landanano.com/products/commercial

    Well, WF 4.5 has come a long way - many of my gripes have been addressed:

    - C# expressions - no more VB 'AndAlso' clauses
    - state machine awesomeness - can query current state
    - many designer improvements - Document Outline is so much more succinct than the Designer!
    - separate WCF Service Contract interfaces and the ability to generate activities from contract operations
    - ability to rehydrate to updated flow definitions via DynamicUpdateMap and WorkflowIdentity

    You can read about the new features here: http://msdn.microsoft.com/en-us/library/hh305677(VS.110).aspx

    2013 could be the year of Workflow evangelism for .NET, as it comes together as the DSL language. E.g. on Azure it could be used to graphically orchestrate between WebRoles, WorkerRoles, AppFabric Queues and the ServiceBus - that would be grand.

    Here's a list of things I'd like to see in Workflow 5.0:

    - Stronger parallelism support for true multithreaded workflows. A workflow executes on a single thread - wouldn't it be great if we had the ability to model TPL Dataflow? Parallel is not really parallel, it just allows AsyncCodeActivity.
    - support for recursion
    - an ExpressionTree activity with an editor design surface
    - a math activity pack
    - return of application-level protocol (3.51 WF services) - automatically expose a state machine as a WCF service, with bookmark Receive activities generated from OperationContract and automatically placed in state transition triggers
    - a new HTML5 ActivityDesigner control - support for different CSS3 skinnable hooks and remote connectivity (I had to roll my own)
    - a data flow view - crucial to understanding the big picture
    - ability to refactor a Sequence to a custom activity in a separate .xaml file - like Expression Blend does for UserControl
    - state machine global error handling - if all states go to an error state, you quickly get visual spaghetti. Now you could nest a state machine, but what if you want an application-level protocol whereby each state exposes certain WCF ops?
    - DSL RAD editing - make the Document Outline into a DSL editor for adding activities. For WF to really succeed as a higher level of abstraction, it needs to be more productive than raw coding - drag & drop on the designer is currently too slow compared to just typing code.
    - an extensible Wizard API - for a pluggable WF editor experience
    - other execution models beyond Sequence, Flowchart & StateMachine: SSIS, Behavior Trees, Wolfram Model tool - surprise us!
    - improvements to the Designer debugging API - SourceLocation is tied to XAML file line number and char position, and ModelService access seems convoluted - why not leverage WPF LogicalTreeHelper / VisualTreeHelper?

    Workflow Team, keep on rocking!

  • WIF, ADFS 2 and WCF – Part 1: Overview

    - by Your DisplayName here!
    A lot has been written already about passive federation and integration of WIF and ADFS 2 into web apps. The whole active/WS-Trust feature area is much less documented or covered in articles and blogs. Over the next few posts I will try to compile all relevant information about the above topics - but let's start with an overview.

    ADFS 2 has a number of endpoints under the /services/trust base address that implement the WS-Trust protocol. They are grouped by the WS-Trust version they support (/13 and /2005), the client credential type (/windows*, /username*, /certificate*) and the security mode (*transport, *mixed and message). You can see the endpoints in the MMC console under the Service/Endpoints page. In other words, you use one of these endpoints (which one exactly depends on your configuration / system setup) to request tokens from ADFS 2.

    The bindings behind the endpoints are more or less standard WCF bindings, but with SecureConversation (establishSecurityContext) disabled. That means that whenever you need to programmatically talk to these endpoints, you can (easily) create client bindings that are compatible. Another option is to use the special bindings that come with WIF (in the Microsoft.IdentityModel.Protocols.WSTrust.Bindings namespace). They are already pre-configured to be compatible with the ADFS endpoints. The downside of these bindings is that you can't use them in configuration. That's definitely a feature request of mine for the next version of WIF.

    The next important piece of information is the so-called Federation Service Identifier. This is the value that you (at least by default) have to use as a realm/appliesTo whenever you are requesting a token for ADFS (e.g. in an IdP -> R-STS scenario). Or (even more) technically speaking, ADFS 2 checks for this value in the audience URI restriction in SAML tokens. You can get to this value by clicking "Edit Federation Service Properties" in the MMC when the Service tree-node is selected.

    OK - I will come back to this basic information in the following posts. Basically I want to go through the following scenarios:

    - ADFS in the IdP role
    - ADFS in the R-STS role (with a chained claims provider)
    - Using the WCF bindings for automatic token issuance
    - Using WSTrustChannelFactory for manual token handling

    Stay tuned...
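
    As a concrete taste of the last scenario, here is a hedged sketch of manual token handling against the username/mixed WS-Trust 1.3 endpoint, using the WIF 1.0 classes mentioned above (the endpoint address, credentials and realm are placeholders, and the binding must match how your ADFS instance is actually configured):

        using System.IdentityModel.Tokens;
        using System.ServiceModel;
        using System.ServiceModel.Security;
        using Microsoft.IdentityModel.Protocols.WSTrust;
        using Microsoft.IdentityModel.Protocols.WSTrust.Bindings;

        // Factory wired to the ADFS 2 username/mixed WS-Trust 1.3 endpoint.
        var factory = new WSTrustChannelFactory(
            new UserNameWSTrustBinding(SecurityMode.TransportWithMessageCredential),
            new EndpointAddress("https://adfs.example.org/adfs/services/trust/13/usernamemixed"));
        factory.TrustVersion = TrustVersion.WSTrust13;
        factory.Credentials.UserName.UserName = "bob";
        factory.Credentials.UserName.Password = "secret";

        // The realm: the relying party's identifier, or the Federation Service
        // Identifier when the token is destined for ADFS itself (IdP -> R-STS).
        var rst = new RequestSecurityToken
        {
            RequestType = RequestTypes.Issue,
            AppliesTo = new EndpointAddress("https://app.example.org/"),
            KeyType = KeyTypes.Symmetric
        };

        var channel = factory.CreateChannel();
        SecurityToken token = channel.Issue(rst);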

  • WebLogic Partner Community Newsletter June 2012

    - by JuergenKress
    Dear WebLogic partner community member,

    Happy new fiscal year FY13 - thanks for the FY12 middleware business! Our WebLogic Partner Community grew very fast to 800+ members! To continue our joint successful business in the new fiscal year, our top priorities in FY13 are:

    - Become trained: the next opportunity are the summer camps in Lisbon & Munich or our on-demand training WebLogic 12c & ExaLogic & ADF - see our detailed training calendar below.
    - Run your marketing & sales campaign: sales kits, marketing kits, solution catalog, add your services to oracle.com, add your events to oracle.com and advertisement.
    - Get recognized: OFM awards, partner excellence awards & references & plaques.
    - Become Specialized: all of the above makes the Oracle WebLogic 12c & ExaLogic & ADF Specialization! Make sure you get your Specialization benefits!

    Topics: key product focus areas will be ias to WebLogic & ExaLogic, ADF mobile and the Oracle Java Cloud platform. Get a sneak preview of our FY13 sales plays (Oracle and Partner confidential). If you cannot attend our Summer Camps and our WebLogic 12c Bootcamps, please register for the on-demand Oracle WebLogic Server 12c Sales Boot Camp & Oracle WebLogic Server 12c PreSales Boot Camp and the WebLogic Server: Diagnosing Performance webcast.

    From June 1st 2012, ExaLogic Specialization is mandatory for re-sell! To support you with your opportunities we published the ExaLogic Kit & Cloud Application Foundation kit, which include sales ppt presentations and technical details! We are also highly interested in running a joint iAS to WebLogic upgrade marketing campaign!

    See you in Lisbon!
    Jürgen Kress, Oracle WebLogic Partner Adoption EMEA

    To read the newsletter please visit http://tinyurl.com/WebLogicnewsJunea2012 (OPN account required). To become a member of the WebLogic Partner Community please register at http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

  • Legality of similar games

    - by Jamie Taylor
    This is my first question on GD.SE, and I hope it's in the right place. A little background: I'm an amateur game developer (read: not explicitly employed to develop games, but employed as a software developer) and took a ComSci with Games Development degree.

    My question: what is the legal situation/standpoint of creating a copycat title? I know that there are only N number of ways of solving a problem, and N number of ways to design a piece of software. Say that an independent developer designed a copycat game (a Tetris clone in this example), and decided to use that game to generate income for themselves as well as interest in their other products. Say the developer adds a disclaimer into the software along the lines of "based on <original title>, originally released c. <year> by <publisher>". Are there any legal problems/grey areas with the developer in this example releasing this game commercially? Would they run into legal problems? Should the developer in this example expect cease-and-desist orders or lawsuit claims from original publishers? Have original publishers been known to, effectively, kill independent projects because they are a little too close to older titles?

    I know that there was at least one attempt by a group of independent developers to remake Sonic the Hedgehog 2, and Sega shut them down. I also know of Sega shutting down development of the independent Streets of Rage Remake. I know that "but it's an old game, your honour," isn't a great legal standpoint when it comes to defending yourself. But could an independent developer have a lawsuit filed against them for re-implementing an older title in a new way? I know that there are a lot of copycat versions of older titles like Tetris available on app stores (and similar services), and that it would be very difficult for a major publisher to shut them all down. Regardless of this, is making a Tetris (or other game) copycat/clone illegal?

    We were taught lots of different things at university, but we never covered copyright law. I presume the thinking behind that was "IF these students get jobs in games development, they won't need to know anything about the legal side of it, because their employers will have legal departments... presumably."

    tl;dr Is it illegal to create a clone or copycat of an old title, and make money from it?

  • cloud programming for OpenStack in C / C++

    - by Basile Starynkevitch
    (Sorry for such a fuzzy question, I am very much a newbie to cloud programming.) I am interested in designing (and developing) a (free software) program in C or C++, probably with most of it being meta-programmed, i.e. part of the C code being generated. I am still in the thinking / designing phase, and I might perhaps give up.

    For reference, I am the main architect and implementor of GCC MELT, a domain specific language to extend the GCC compiler (the MELT language is translated to C/C++ and is bootstrapped: the MELT to C/C++ translator being written in MELT). And I am dreaming of extending it with some cloud computing abilities. But I am a newbie in cloud computing. (I am only interested in free-software, GPLv3-friendly cloud computing, which probably means OpenStack.)

    I believe that "compiling on the cloud with some enhanced GCC" could make sense (for super-optimizations or static analysis of e.g. an entire Linux distribution, or at least a massive GCC-compiled free software project like Qt, GCC itself, or the Linux kernel). I'm dreaming of a MELT-specific monitoring program which would store, communicate, and enhance GCC compilation (extended by MELT). So the picture would be that each GCC process (actually the cc1 or cc1plus started by the gcc driver, suitably extended by some MELT extension) would communicate with some monitor. That "monitoring/persisting" program would run "on the cloud" (and probably manage some information produced by GCC, e.g. in NoSQL bases).

    So, how should some (yet to be written) C program (some Linux daemon) be designed to be cloud-friendly? So far, I understood that it should provide some web service, probably through a RESTful service, so it should use an HTTP server library like onion. And OpenStack is able to start (e.g. a dozen of) such services. But I don't have a clear picture of what OpenStack brings. So far, I noticed the ability to manage (and distribute) virtual machines (with some Python API). It is less clear how it can distribute some ELF executable, how it can start it, etc.

    Do you have any references or examples of C / C++ programming on the cloud? How should a "cloud-friendly" (actually, OpenStack-friendly) C/C++ server application be designed?

  • What Counts for a DBA: Passion

    - by drsql
    One of my first questions, when interviewing for a DBA/programmer position, is always: "Why do you want this job?" The answers I receive range from cheesy hyperbole ("I want to enhance your services with my vast knowledge") to deadpan realism ("I have N kids who all have a hole in the front of their face where food goes"). Both answers are fine in their own way, at least displaying some self-confidence, humour and honesty, but once in a while I'll hear the answer that is music to my ears...

    "I LOVE DATABASES!"

    Whenever I hear it, my nerves tingle in hopeful anticipation; have I found someone for whom working with databases isn't just a job, but a passion? Inevitably, I'm often disappointed. What initially seemed like passion turns out to be rather shallow enthusiasm; the person is enthusiastic about working with databases in the same way he or she might be about eating a bag of Cajun-spiced kettle chips: enjoyable, but not something to think about too deeply or take too seriously. Enthusiasm comes, and enthusiasm goes. I've seen countless technical forum users burst onto the scene in a blaze of frantic question-answering, only to fade away within days, never to be heard from again.

    Passion, however, is more of a longstanding commitment. The biographies of the great technologists and authors of the recent past are full of the sort of passion and engrossment that lead a person to write a novel non-stop for a fortnight with no sleep and only dog food to eat (Philip K. Dick), or refuse to leave the works of the first tunnel under the Thames even though it was flooded (Brunel). In a similar (though more modest) way, my passion for working with databases has led me to acts that might cause someone for whom it was "just a job" to roll their eyes in disbelief. Most evenings you're more likely to find me reading a database book than watching TV. I've spent hundreds of hours of my spare time writing blogs and articles (some of which are only read by tens of people); I've spent hundreds of dollars travelling to conferences, paying my own flight and hotel expenses, so that I can share a little of what I know and mix with some like-minded people. And I know I'm far from alone in this, in the SQL Server community.

    Passion isn't everything, of course, and it isn't always accompanied by any great skill, but in almost every case that skill can be cultivated over time. If you are doing what you are passionate about, work turns into more than just a way to feed your kids; it becomes your hobby, entertainment, and preoccupation. And it is this passion that gives a DBA the obsessive stubbornness, the refusal to be beaten by even the most difficult problem, which is often so crucial. A final word of warning though: passion without limits can turn weird. Never let it get in the way of your wife, kids, bills, or personal hygiene.

  • BYOD-The Tablet Difference

    - by Samantha.Y. Ma
    By Allison Kutz, Lindsay Richardson, and Jennifer Rossbach, Sales Consultants

    Less than three years ago, Apple introduced a new concept to the world: the tablet. It's hard to believe that in only 32 months, the iPad ushered in an entirely new way to do business. Because of their mobility and ease of use, tablets have grown in popularity to keep up with the increasingly "on the go" lifestyle, and their popularity isn't expected to decrease any time soon. In fact, global tablet sales are expected to increase drastically within the next five years, from 56 million tablets to 375 million by 2016.

    Tablets have been utilized for every function imaginable in today's world. With over 730,000 active applications available for the iPad, these tablets are educational devices, portable book collections, gateways into social media, entertainment for children when Mom and Dad need a minute on their own, and so much more. It's no wonder that 74% of those who own a tablet use it daily, 60% use it several times a day, and an average of 13.9 hours per week are spent tapping away. Tablets have become a critical part of a user's personal life; but why stop there?

    Businesses today are taking major strides in implementing these devices, with the hopes of benefiting from efficiency and productivity gains. Limo and taxi drivers use tablets as payment devices instead of traditional cash transactions. Retail outlets use tablets to find the exact merchandise customers are looking for. Professors use tablets to teach their classes, and business professionals demonstrate solutions and review reports from tablets. Since an overwhelming majority of tablet users have started to use their personal iPads, PlayBooks, Galaxys, etc. in the workforce, organizations have had to make a change. In many cases, companies are willing to make that change. In fact, 79% of companies are making new investments in mobility this year. Gartner reported that 90% of organizations are expected to support corporate applications on personal devices by 2014.

    It's not just companies that are changing. Business professionals have become accustomed to tablets making their personal lives easier, and want that same effect in the workplace. Professionals no longer want to waste time manually entering data in their computer, or worse yet in a notebook, especially when the data has to be later transcribed to an online system. The response: the Bring Your Own Device phenomenon. According to Gartner, BYOD is "an alternative strategy allowing employees, business partners and other users to utilize a personally selected and purchased client device to execute enterprise applications and access data." Employees whose companies embrace this trend are more efficient because they get to use devices they are already accustomed to.

    Tablets change the game when it comes to how sales professionals perform their jobs. Sales reps can easily store and access customer information and analytics using tablet applications, such as Oracle Fusion Tap. This method is much more enticing for sales reps than spending time logging interactions on their (what seem to be outdated) computers. Forrester and IDC reported that on average sales reps spend 65% of their time on activities other than selling, so having a tablet application to use on the go is extremely powerful. In February, Information Week released a list of "9 Powerful Business Uses for Tablet Computers," ranging from "enhancing the customer experience" to "improving data accuracy" to "eco-friendly motivations". Tablets complement the lifestyle of professionals who strive to be effective and efficient, both in the office and on the road.

    Three things businesses need to do to embrace BYOD:
    1. Make customer-facing websites tablet-friendly for consistent user experiences
    2. Develop tablet applications to continue to enhance the customer experience
    3. Embrace and use the technology that comes with tablets

    Almost 55 million people in the U.S. own tablets because they are convenient, easy, and powerful. These are qualities that companies strive to achieve with any piece of technology. The inherent power of the devices, coupled with the growing number of business applications, ensures that tablets will transform the way that companies and employees perform.

  • How granular should a command be in a CQ[R]S model?

    - by Aaronaught
    I'm considering a project to migrate part of our WCF-based SOA over to a service bus model (probably nServiceBus) and using some basic pub-sub to achieve Command-Query Separation. I'm not new to SOA, or even to service bus models, but I confess that until recently my concept of "separation" was limited to run-of-the-mill database mirroring and replication. Still, I'm attracted to the idea because it seems to provide all the benefits of an eventually-consistent system while sidestepping many of the obvious drawbacks (most notably the lack of proper transactional support).

    I've read a lot on the subject from Udi Dahan, who is basically the guru on ESB architectures (at least in the Microsoft world), but one thing he says really puzzles me:

        As we get larger entities with more fields on them, we also get more actors working with those same entities, and the higher the likelihood that something will touch some attribute of them at any given time, increasing the number of concurrency conflicts. [...] A core element of CQRS is rethinking the design of the user interface to enable us to capture our users' intent such that making a customer preferred is a different unit of work for the user than indicating that the customer has moved or that they've gotten married. Using an Excel-like UI for data changes doesn't capture intent, as we saw above.
        -- Udi Dahan, Clarified CQRS

    From the perspective described in the quotation, it's hard to argue with that logic. But it seems to go against the grain with respect to SOAs. An SOA (and really services in general) is supposed to deal with coarse-grained messages so as to minimize network chatter - among many other benefits. I realize that network chatter is less of an issue when you've got highly-distributed systems with good message queuing and none of the baggage of RPC, but it doesn't seem wise to dismiss the issue entirely. Udi almost seems to be saying that every attribute change (i.e. field update) ought to be its own command, which is hard to imagine in the context of one user potentially updating hundreds or thousands of combined entities and attributes, as is often the case with a traditional web service. One batch update in SQL Server may take a fraction of a second given a good highly-parameterized query, table-valued parameter or bulk insert to a staging table; processing all of these updates one at a time is slow, slow, slow, and OLTP database hardware is the most expensive of all to scale up/out.

    Is there some way to reconcile these competing concerns? Am I thinking about it the wrong way? Does this problem have a well-known solution in the CQS/ESB world? If not, then how does one decide what the "right level" of granularity in a Command should be? Is there some "standard" one can use as a starting point - sort of like 3NF in databases - and only deviate when careful profiling suggests a potentially significant performance benefit? Or is this possibly one of those things that, despite several strong opinions being expressed by various experts, is really just a matter of opinion?
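
    To make the granularity question concrete, here is a hedged C# sketch (all type names invented for illustration) contrasting the coarse-grained "Excel-like" update with the intent-capturing commands Dahan describes:

        using System;

        // Coarse-grained: one catch-all command mirroring the entity's fields.
        // Captures *what changed*, not *why* - the style Dahan argues against.
        public class UpdateCustomerCommand
        {
            public Guid CustomerId { get; set; }
            public string Street { get; set; }
            public string City { get; set; }
            public bool IsPreferred { get; set; }
            public string MaritalStatus { get; set; }
            // ...dozens more fields, any subset of which may have changed
        }

        // Fine-grained: each command is a unit of user intent.
        public class CustomerMovedCommand
        {
            public Guid CustomerId { get; set; }
            public string NewStreet { get; set; }
            public string NewCity { get; set; }
        }

        public class CustomerPreferredStatusGrantedCommand
        {
            public Guid CustomerId { get; set; }
            public string Reason { get; set; }
        }

    Note that intent-sized commands don't have to mean one network round-trip per field: a single transport message can batch several small commands, which is one common way practitioners reconcile intent capture with coarse-grained SOA messaging.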

  • PeopleSoft Grants & the Federal Agency Letter of Credit Draw Changes

    - by Mark Rosenberg
    For decades, most, if not all, US Federal agencies that sponsor research allowed grant recipients to request and receive payments using pooled accounts, commonly known as pooled letter of credit (LOC) draws. This enabled organizations, such as universities and hospitals, fast and efficient access to reimbursement of the expenditures they incurred conducting research across a portfolio of grants. To support this business practice, the PeopleSoft Grants solution has delivered an LOC Draw report to provide the total request amount along with all of the supporting invoice details for reconciliation and audit purposes.

    Now, in an attempt to provide greater transparency, eliminate fraud, strengthen accountability for grant-related financial transactions, and simplify grant award closeout, many US Federal sponsors are transitioning from the "pooling" letter of credit draw method to requesting on a "grant-by-grant" basis. The National Science Foundation, the second largest issuer of Federal awards, already transitioned to detailed grant draws in 2013. And, in response to the U.S. Department of Health and Human Services (HHS) directive to HHS-supported agencies, the largest Federal awards sponsor, the National Institutes of Health (NIH), will fully transition to the new HHS subaccount draw method. This will require NIH award recipients to request payments based on actual expenses incurred on an award-by-award basis. NIH is expected to fully transition to this new draw method by the end of Federal fiscal year 2015. (NIH had planned to fully transition by the end of fiscal 2014; however, the impact to institutions was deemed significant enough that a reprieve was recently granted.)

    In light of these new Federal draw requirements, we have recently released these new features to aid our customers on both PeopleSoft Grants releases 9.1 and 9.2:

    1. Federal Award Identification Number on the Proposal and Award Profile
    2. Letter of credit fields on contract lines to support award-basis draws and comply with Federal close-out mandates
    3. Process to produce both pro forma and final LOC Draw Reports in BI Publisher report format
    4. Subaccount ID field on the LOC Summary and a new BI Publisher version of the LOC Summary report
    5. Added Subaccount field and contract info to be displayed on the LOC Summary page
    6. Ability to generate pro forma and invoiced draw listings by a variety of dimensions
    7. Queries for generation and manipulation of data to upload into sponsor payment request systems and perform payment matching
    8. Contracts LOC Close Out query to quickly review final balances prior to initiating final draws and preparing Federal Financial Reports prior to close

    The PeopleSoft Development team actively monitors this and other major Federal changes and continues working closely with the Grants Product Advisory Group of the Higher Education User Group to ensure a clear understanding of what our customers need in order to transition to new approaches for doing business with the Federal government.

    For more information regarding the enhancements to the PeopleSoft Grants solution, existing customers can log in to My Oracle Support and review the Enhancements to Letter of Credit Process (Doc ID 1912692.1) associated with resolution ID 904830. This enhanced LOC functionality is available in both PeopleSoft FSCM 9.1 Bundle #31 and PeopleSoft FSCM 9.2 Update Image 8.

  • How do you automate a SharePoint 2010 deployment?

    - by Enrique Lima
    In the last couple of months SharePoint traffic (consulting, training and speaking) has picked up, and with that also the requests for deployments. There are good, great, bad and really bad things around this, but that is for another topic. Part of the good and great, however, has been the fact that organizations want to do a proof-of-concept deployment (even when WSS or MOSS has been deployed).

    We can go through a session (Microsoft has the SDPS concept, SharePoint Deployment Planning Services) of discovering what the customer wants to achieve from their investment in the platform and then proceed to model the solution that would fit their needs. But it should not stop there. The next step should be a POC (as many have requested) to test things out.

    Now, on to the meat of this post: how do I deploy? While it is a good process to watch and see all of it take place, not many have the time to sit through that, even more so when that has been part of the description of deploying the platform in the sessions mentioned above. I will, though, break it into a deployment for development purposes and a deployment of a farm - two tools (or scripts) for those two different types of deployment.

    First, let me address the development environment. Around the last week in October, Chris Johnson (SharePoint Product Team) announced a SharePoint Easy Setup for Developers. The kit itself will assist you in installing SharePoint Server (in standalone mode), the tools that go around Visual Studio, Expression Studio and the Office 2010 tools. Here is the link to Chris' post: http://blogs.msdn.com/b/cjohnson/archive/2010/10/28/announcing-sharepoint-easy-setup-for-developers.aspx

    The other scenario is the use of a script to assist you through the deployment of a farm. Now, this is not to override planning; it should highlight the need for planning even more. How? By having your service accounts planned, along with the structure of the sites and the scale of your deployment. Enter AutoSPInstaller. This is a CodePlex project, and the intent behind it is not only to automate the installation but to give some meaning and get some sense out of what goes on during a SharePoint deployment.

    How? Take for example the creation of the databases. When we do the initial out-of-the-box deployment by using the wizard, more times than not we leave the names as they are. How is that a "bad thing"? Let's make it a better practice to rename those databases, and have them take on a name that is not "GUID-ized". Having a better naming convention will not hurt; on the other hand, it will allow for consistency.

    Here is the link to AutoSPInstaller's site on CodePlex: http://autospinstaller.codeplex.com/

  • eSTEP Newsletter December 2012

    - by uwes
    Dear Partners,

    We would like to inform you that the December issue of our newsletter is now available. The issue contains information on the following topics:

    Notes from Corporate: It's Earth Day - Every Day; Oracle SPARC Newsletter; Pre-Built Developer VMs (for Oracle VM VirtualBox); Oracle Database Appliance Now Certified by SAP; Database High Availability; Cultivating Business-Led Innovation

    Technical Corner: Geek Fest! Talking About the Design of the T4 and T5 SPARC Chips; Blog: Is This Your Idea of Disaster Recovery?; Oracle Practitioner Guide - A Pragmatic Approach to Cloud Adoption; Darren Moffat Explains the New ZFS Encryption Features in Solaris 11.1; Command Summary: Basic Operations with the Image Packaging System; SPARC T4 Server Delivers Outstanding Performance on Oracle Business Intelligence Enterprise Edition 11g; SPARC T4-4 Servers Set First World Record on PeopleSoft HCM 9.1 Benchmark; Sun ZFS Appliance Monitor Refresh: Core Factor Table; Remanufactured Systems Program for Sun Systems from Oracle; Reminder: Oracle Premier Support for Systems; Reminder: Oracle Platinum Services

    Learning & Events: eSTEP Events Schedule; Recently Delivered Techcasts; Webinar: Maximum Availability with Oracle GoldenGate

    References: LUKOIL Overseas Holding Optimizes Oil Field Development Projects with Integrated Project Management; United Networks Increases Accounting Flexibility and Boosts System Performance with ERP Applications Upgrade; Ziggo Rapidly Creates Applications That Accelerate Communications-Service Orders

    How to...: The Role of Oracle Solaris Zones and Oracle Linux Containers in a Virtualization Strategy; How to Update to Oracle Solaris 11.1; Using svcbundle to Create Manifests and Profiles in Oracle Solaris 11.1; How to Migrate Your Data to Oracle Solaris 11 Using Shadow Migration; How to Script Oracle Solaris 11.1 Zones for Easy Cloning; How to Script Oracle Solaris 11 Zones Creation for a Network-in-a-Box Configuration; How to Know Whether T4 Crypto Accelerators Are in Use; Fault Handling and Prevention - Part 1; Transforming and Consolidating Web Data with Oracle Database; Looking Under the Hood at Networking in Oracle VM Server for x86; Best Way to Migrate Data from Legacy File System to ZFS in Oracle Solaris 11

    Special Year End Article: The Top 10 Strategic CIO Issues For 2013

    You can find the newsletter on our portal under eSTEP News ---> Latest Newsletter. You will need to provide your email address and the PIN below to get access. The link to the portal is shown below.

    URL: http://launch.oracle.com/
    PIN: eSTEP_2011

    Previously published newsletters can be found under the Archived Newsletters section, and more useful information under the Events, Download and Links tabs. Feel free to explore, and any feedback is appreciated to help us improve the service and information we deliver.

    Thanks and best regards,
    Partner HW Enablement EMEA

  • Is SOAP HTTP POST more complicated than I thought?

    - by Pete Petersen
    I'm currently writing a bit of code to send some XML data to a web service via HTTP POST. I thought this would be really simple and have written the following example code (C#):

        Console.WriteLine("Press enter to send data...");
        while (Console.ReadLine() != "q")
        {
            HttpWebRequest httpWReq = (HttpWebRequest)WebRequest.Create(@"http://localhost:8888/");

            Foo fooItem = new Foo
            {
                Member1 = "05",
                Member2 = "74455604",
                Member3 = "15101051",
                Member4 = 1,
                Member5 = "fsf",
                Member6 = 6.52,
            };

            ASCIIEncoding encoding = new ASCIIEncoding();
            string postData = fooItem.ToXml();
            byte[] data = encoding.GetBytes(postData);

            httpWReq.Method = "POST";
            httpWReq.ContentType = "application/xml";
            httpWReq.ContentLength = data.Length;

            using (Stream stream = httpWReq.GetRequestStream())
            {
                stream.Write(data, 0, data.Length);
            }

            HttpWebResponse response = (HttpWebResponse)httpWReq.GetResponse();
            string responseString = new StreamReader(response.GetResponseStream()).ReadToEnd();
            Console.WriteLine("Received " + responseString);
            Console.WriteLine("Press enter to send data...");
        }

    This is all I thought would be necessary; however, I have now been given the details for the web service. These include some information which is unfamiliar to me, and I'm unsure whether I need to include it. The information I was sent was:

        <url>http://sometext/soap/rpc</url>
        <namespace>http://sometext/a.services</namespace>
        <method>receiveInfo</method>
        <parm-id>xmldata</parm-id>    (Input data) (Actual XML data as string)
        <parm-id>status</parm-id>     (Output data)
        <userid>user</userid>
        <password>pass</password>
        <secure>false</secure>

    I guess this means I need to include a username and password somehow, but I'm not sure what the namespace or method fields are used for. Could anyone give me a hint? Sorry, I've never used web services before.
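
    For what it's worth, those details suggest the service expects a SOAP envelope calling a receiveInfo method, rather than a bare XML POST. A hedged sketch of what that change might look like (the envelope layout, SOAPAction value and Basic-auth credentials are assumptions inferred from the details above, not confirmed by any WSDL):

        // Wrap the payload in a SOAP envelope that calls receiveInfo in the
        // given namespace, passing the XML as the xmldata string parameter.
        string soapEnvelope =
            @"<soapenv:Envelope xmlns:soapenv=""http://schemas.xmlsoap.org/soap/envelope/""
                                xmlns:a=""http://sometext/a.services"">
                <soapenv:Body>
                  <a:receiveInfo>
                    <xmldata>" + System.Security.SecurityElement.Escape(postData) + @"</xmldata>
                  </a:receiveInfo>
                </soapenv:Body>
              </soapenv:Envelope>";

        HttpWebRequest soapReq = (HttpWebRequest)WebRequest.Create("http://sometext/soap/rpc");
        soapReq.Method = "POST";
        soapReq.ContentType = "text/xml; charset=utf-8";
        soapReq.Headers.Add("SOAPAction", "receiveInfo");            // assumption: check the WSDL
        soapReq.Credentials = new NetworkCredential("user", "pass"); // from the userid/password fields

        byte[] envelopeBytes = Encoding.UTF8.GetBytes(soapEnvelope);
        soapReq.ContentLength = envelopeBytes.Length;
        using (Stream s = soapReq.GetRequestStream())
        {
            s.Write(envelopeBytes, 0, envelopeBytes.Length);
        }

    If the service publishes a WSDL, generating a client proxy (with wsdl.exe or Visual Studio's Add Service Reference) is usually easier and less error-prone than building envelopes by hand.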

  • Sync Google Calendar with SharePoint Calendar

    - by dataintegration
    The ADO.NET Providers for Google and SharePoint make it easy to retrieve and update data in both Google's web services and SharePoint. This article shows how the SQL interface to data makes it easy to build applications that need to move data from one source to another. The application described here is a demo Windows application that synchronizes calendar events between Google and SharePoint, but the RSSBus Providers can be used to achieve integrations on both the .NET and the Java platforms, including more sophisticated features like full automation.

    Getting the Events

    Step 1: Google accounts can have several calendars. Obtain a list of a user's Google Calendars by issuing a query to the Calendars table. For example:

        SELECT * FROM Calendars

    Step 2: In order to get a list of the events from a given Google Calendar, issue a query to the CalendarEvents table while specifying the CalendarId from the Calendars table. The resulting events can be further filtered by using the StartDateTime or EndDateTime columns. For example:

        SELECT * FROM CalendarEvents WHERE (CalendarId = '[email protected]') AND (StartDateTime >= '1/1/2012') AND (StartDateTime <= '2/1/2012')

    Step 3: SharePoint stores data in lists. There are various types of lists, e.g., document lists and calendar lists. A SharePoint account can have several lists of the same type. To find all the calendar lists in SharePoint, use the ListLists stored procedure and inspect the BaseTemplate column.

    Step 4: The SharePoint data provider models each SharePoint list as a table. Get the events in a particular calendar by querying the table with the same name as the list. The events may be filtered further by specifying the EventDate or EndDate columns. For example:

        SELECT * FROM Calendar WHERE (EventDate >= '1/1/2012') AND (EventDate <= '2/1/2012')

    Synchronizing the Events

    Synchronizing the events is a simple process. Once the events from Google and SharePoint are available, they can be compared and synchronized based on user preference. The sample application does this based on user input, but it is easy to create one that does the synchronization automatically. The INSERT, UPDATE, and DELETE statements available in both data providers make it easy to create, update, or delete events as needed; a sketch of one direction of the sync follows below.

    Pre-Built Demo Application

    The executable for the demo application can be downloaded here. Note that this demo is built using BETA builds of the ADO.NET Provider for Google V2 and ADO.NET Provider for SharePoint V2, and will expire in 2013.

    Source Code

    You can download the full source of the demo application here. You will need the Google ADO.NET Data Provider V2 and the SharePoint ADO.NET Data Provider V2, which can be obtained here.
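
    As a sketch of the Google-to-SharePoint direction using the providers' ADO.NET interface: the GoogleConnection/GoogleCommand names, both connection strings, and the Summary/Title column mapping are assumptions patterned on the SharePoint provider classes documented elsewhere on this site, not code from the demo.

        using System.Data;
        using System.Data.RSSBus.Google;      // assumed namespace, mirroring the
        using System.Data.RSSBus.SharePoint;  // documented System.Data.RSSBus.SharePoint

        GoogleConnection googleConn = new GoogleConnection("User=USER;Password=PASSWORD;");
        SharePointConnection spConn = new SharePointConnection("User=USER;Password=PASSWORD;URL=SHAREPOINT-SITE;");

        // Read January's events from one Google calendar (Step 2 above).
        GoogleCommand select = new GoogleCommand(
            "SELECT Summary, StartDateTime, EndDateTime FROM CalendarEvents " +
            "WHERE (CalendarId = '[email protected]') " +
            "AND (StartDateTime >= '1/1/2012') AND (StartDateTime <= '2/1/2012')", googleConn);

        using (IDataReader reader = select.ExecuteReader())
        {
            while (reader.Read())
            {
                // Insert each event into the SharePoint calendar list (Step 4 above).
                SharePointCommand insert = new SharePointCommand(
                    "INSERT INTO Calendar (Title, EventDate, EndDate) VALUES (@t, @s, @e)", spConn);
                insert.Parameters.Add(new SharePointParameter("@t", reader["Summary"]));
                insert.Parameters.Add(new SharePointParameter("@s", reader["StartDateTime"]));
                insert.Parameters.Add(new SharePointParameter("@e", reader["EndDateTime"]));
                insert.ExecuteNonQuery();
            }
        }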

  • BI&EPM in Focus November 2013

    - by Mike.Hallett(at)Oracle-BI&EPM
    IBM is Embracing Oracle Exalytics: The Velocity of Thought and Action (link)

    Customers
    - Ambulance Victoria, Australia, uses analytics and modelling to serve the expanding needs of a growing population (link)
    - Cablemás Selects Oracle to Speed Customer Data Insights (link)
    - National Instruments Introduces New Business Intelligence Solutions - Runs Reports up to 30x Faster, and Expands Customer Insight (link)
    - FLSmidth Ensures Precise, Transparent Financial Reporting at All Business Levels, Reduces Financial Consolidation Time by up to 40% (link)

    Enterprise Performance Management
    - Partner Edgewater Ranzal Webinar Series:
      - Mitigate Your Biggest EPM Project Risk - Thursday, 21st November - Register here: 4.00 GMT
      - Capital Planning in the Energy Industry - Tuesday, 26th November - Register here: 4.00 GMT
      - Driving Value in the Retail Industry Using Hyperion Strategic Finance (HSF) - Tuesday, 10th December - Register here: 7.00 GMT
    - Dec 11, Look Smarter Selling Hyperion Profitability & Cost Management (HPCM) Webcast (link)
    - EPM System Infrastructure Tips & Tricks
    - Support: November EPM Patch Set Updates released
    - Business Analytics Monthly Index - October 2013
    - Hyperion Smart View Assistance with OBIEE 11.1.1.7
    - Hyperion Disclosure Management 11.1.2.3.330 PSU 17444967 [Doc ID 1592645.1]
    - Hyperion Financial Close Management (FCM) 11.1.2.3.100 PSU 16989110 [Doc ID 1592644.1]

    Business Intelligence
    - BI-Apps Whitepaper: Packaged Analytic Applications: Accelerating Time and Value, by Wayne Eckerson (link)
    - BI Apps Blog: A Closer Look at Oracle Price Analytics (link)
    - Blog: Taking Your Business Scorecard Golfing (link)
    - Blog: Practical Uses of Business Scorecards, from Company-Wide to Process Specific (link)
    - Nov 19, Big Data at Work Series: How Delphi Harnesses Big Data to Improve Warranty Response & Customer Satisfaction (link)
    - Rittman Mead Blog: Oracle BI Apps 11.1.1.7.1 - GoldenGate Integration
    - Support: OBIEE Suite Bundle Patches (understand OBIEE naming convention) [Doc ID 1591422.1]
    - Support Blog: Java update alert: Essbase Administration Services (EAS) 11.1.2.3 (link)
    - Support Blog: OBIEE 11.1.1.7.131017 now available (link)

    Read the article

  • As the current draft stands, what is the most significant change the "National Strategy for Trusted Identities in Cyberspace" will provoke?

    - by mfg
    A current draft of the "National Strategy for Trusted Identities in Cyberspace" has been posted by the Department of Homeland Security. This question is not asking about privacy or constitutionality, but about how this act will impact developers' business models and development strategies. When the post was made I was reminded of Jeff's November blog post regarding an internet driver's license. Whether or not that is a perfect model, both approaches are attempting to handle a problem shared by developers and end users: how do we establish an online identity? The question I ask here is: with respect to the various burdens that would be imposed on developers and users, what are some of the major, foreseeable implementation issues that will arise from the U.S. Government's currently proposed solution?

    For a quick primer on the setup, jump to page 12 for the infrastructure components; here are two stand-outs:

    An Identity Provider (IDP) is responsible for the processes associated with enrolling a subject, and establishing and maintaining the digital identity associated with an individual or NPE. These processes include identity vetting and proofing, as well as revocation, suspension, and recovery of the digital identity.

    The IDP is responsible for issuing a credential, the information object or device used during a transaction to provide evidence of the subject's identity; it may also provide linkage to authority, roles, rights, privileges, and other attributes. The credential can be stored on an identity medium, which is a device or object (physical or virtual) used for storing one or more credentials, claims, or attributes related to a subject. Identity media are widely available in many formats, such as smart cards, security chips embedded in PCs, cell phones, software-based certificates, and USB devices. Selection of the appropriate credential is implementation specific and dependent on the risk tolerance of the participating entities.

    Here are the first actionable components considered in the draft:
    Action 1: Designate a Federal Agency to Lead the Public/Private Sector Efforts Associated with Achieving the Goals of the Strategy
    Action 2: Develop a Shared, Comprehensive Public/Private Sector Implementation Plan
    Action 3: Accelerate the Expansion of Federal Services, Pilots, and Policies that Align with the Identity Ecosystem
    Action 4: Work Among the Public/Private Sectors to Implement Enhanced Privacy Protections
    Action 5: Coordinate the Development and Refinement of Risk Models and Interoperability Standards
    Action 6: Address the Liability Concerns of Service Providers and Individuals
    Action 7: Perform Outreach and Awareness Across all Stakeholders
    Action 8: Continue Collaborating in International Efforts
    Action 9: Identify Other Means to Drive Adoption of the Identity Ecosystem across the Nation

    Read the article

  • Download files from a SharePoint site using the RSSBus SSIS Components

    - by dataintegration
    In this article we will show how to use a stored procedure included in the RSSBus SSIS Components for SharePoint to download files from SharePoint. While the article uses the RSSBus SSIS Components for SharePoint, the same process will work with any of our SSIS Components.

    Step 1: Open Visual Studio and create a new Integration Services Project.
    Step 2: Add a new Data Flow Task to the Control Flow screen and open the Data Flow Task.
    Step 3: Add an RSSBus SharePoint Source to the Data Flow Task.
    Step 4: In the RSSBus SharePoint Source, add a new Connection Manager and enter your credentials for the SharePoint site.
    Step 5: From the Table or View dropdown, choose the name of the Document Library that you are going to back up, and close the wizard.
    Step 6: Add a Script Component to the Data Flow Task and drag an output arrow from the RSSBus SharePoint Source to it.
    Step 7: Open the Script Component, edit the Input Columns, and choose all the columns.
    Step 8: Editing the script opens a new Visual Studio instance with a project in it. In this project, add a reference to the RSSBus.SSIS2008.SharePoint assembly, available in the RSSBus SSIS Components for SharePoint installation directory.
    Step 9: In the ScriptMain class, import the System.Data.RSSBus.SharePoint namespace and go to the Input0_ProcessInputRow method (this method's name may vary depending on the input name in the Script Component).
    Step 10: In the Input0_ProcessInputRow method, add code that calls the DownloadDocument stored procedure for each row. Sample code is shown below:

    String connString = "Offline=False;Password=PASSWORD;User=USER;URL=SHAREPOINT-SITE";
    String downloadDir = "C:\\Documents\\";
    SharePointConnection conn = new SharePointConnection(connString);

    // Invoke the DownloadDocument stored procedure for the current row.
    SharePointCommand comm = new SharePointCommand("DownloadDocument", conn);
    comm.CommandType = CommandType.StoredProcedure;
    comm.Parameters.Clear();

    // Local path the file will be saved to.
    String file = downloadDir + Row.LinkFilenameNoMenu.ToString();
    comm.Parameters.Add(new SharePointParameter("@File", file));

    // Name of the document library, taken from the server-relative URL.
    String list = Row.ServerUrl.ToString().Split('/')[1].ToString();
    comm.Parameters.Add(new SharePointParameter("@Library", list));

    // Name of the file on the server.
    String remoteFile = Row.LinkFilenameNoMenu.ToString();
    comm.Parameters.Add(new SharePointParameter("@RemoteFile", remoteFile));

    comm.ExecuteNonQuery();

    After saving your changes to the Script Component, you can execute the project and find the downloaded files in the download directory.

    SSIS Sample Project

    To help you get started with the SharePoint Data Provider within SQL Server SSIS, download the fully functional sample package. You will also need the SharePoint SSIS Connector to make the connection. You can download a free trial here. Note: Before running the demo, you will need to change your connection details in both the Script Component code and the Connection Manager.
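
    Before wiring the call into a full SSIS package, it can be handy to verify the credentials and the stored procedure from a small console application. The following is a minimal sketch under the assumption that the same System.Data.RSSBus.SharePoint classes can be referenced from a standalone project; the connection string, library name, and file name are placeholders.

    using System;
    using System.Data;
    using System.Data.RSSBus.SharePoint;

    class DownloadTest
    {
        static void Main()
        {
            // Placeholder connection details; substitute your own.
            String connString = "Offline=False;Password=PASSWORD;User=USER;URL=SHAREPOINT-SITE";
            SharePointConnection conn = new SharePointConnection(connString);

            // Same stored procedure call as in the Script Component above,
            // but for a single known file in the "Shared Documents" library.
            SharePointCommand comm = new SharePointCommand("DownloadDocument", conn);
            comm.CommandType = CommandType.StoredProcedure;
            comm.Parameters.Add(new SharePointParameter("@File", "C:\\Documents\\report.docx"));
            comm.Parameters.Add(new SharePointParameter("@Library", "Shared Documents"));
            comm.Parameters.Add(new SharePointParameter("@RemoteFile", "report.docx"));
            comm.ExecuteNonQuery();

            Console.WriteLine("Downloaded report.docx");
        }
    }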

    Read the article

  • A starting point for Use Cases and User Stories

    - by Mike Benkovich
    Originally posted on: http://geekswithblogs.net/benko/archive/2013/07/23/a-starting-point-for-use-cases-and-user-stories.aspx

    Software is a challenging business and is rife with opportunities to go wrong. Over the years a number of methodologies have evolved to help make sure that things go right. In an effort to contribute to this I've created a list of user stories that I think should be included and are sometimes just assumed. Note this is a work in progress, so I'm looking for your feedback. I'm curious what you would add or change in my list.

    · As a DBA I am working with a normalized data model that reflects an agreed-upon logical model for the system
    · As a DBA I am using consistent names for my fields which match the naming standards of my organization
    · As a DBA my model supports simple CRUD operations against all the entities
    · As an Application Architect the UI has been validated against the business requirements and a complete set of user stories has been created
    · As an Application Architect the database model has been validated against the UI
    · As an Application Architect we have a logical business model that describes all the known and/or expected usage of the system during the software's expected lifecycle
    · As an Application Architect we have a deployment diagram that describes how the application components will be deployed
    · As an Application Architect we have a navigation diagram that describes the typical application flow
    · As an Application Architect we have identified points of interaction which describe how the UI interacts with the services and the data storage
    · As an Application Architect we have identified external systems which may now or in the future use the data of this application and have adapted the logical model to include these interactions
    · As an Application Architect we have identified existing systems and tools that can be extended and/or reused to help this application achieve its business goals
    · As a Project Manager all team members understand the goals of each release and iteration as they are planned
    · As a Project Manager all team members understand their role and the roles of others
    · As a Project Manager we have the support of the business to do the right thing even if it is not the expedient thing
    · As a Test/QA Analyst we have created a simulation environment for testing the system which does not use sensitive data and accurately reflects the scenarios of all the data that will be supported by the system
    · As a Test/QA Analyst we have identified the matrix of supported clients used to access the system, including the likely browsers, mobile devices, and other interfaces to work with the application
    · As a Test/QA Analyst we have created exit criteria for each user story that match the requirements of the business story that was used to create them
    · As a Test/QA Analyst we have access to a test environment that is isolated from the production and staging environments
    · As a Test/QA Analyst we have a way to reset the environment so we can rerun tests when a new version of the software becomes available
    · As a Test/QA Analyst I am able to automate portions of the test process

    Thoughts? -mike

    Read the article

  • Professional immigration

    - by etranger
    Hello all, does anyone here have practical advice on professional relocation from Russia to Europe? The reasons behind making such a decision are beyond the subject, so I'll stick to the practical part. Having done some of the "common stuff" for finding a job, I am now facing two serious problems: I am a "dual-class" person, with a university degree in marketing and multiple years of self-taught computer competence (hence my writing here), with professional experience in both areas. And I don't currently hold a European work permit. From what I can see, this results in the typical HR person throwing out my CV as either "overqualified" or "too much trouble to arrange the permit". I do have the skills and character to start my own business, but that requires start-up capital I don't have: over the last few years I had to pay high bills for the medical treatment of a family member, who has since passed away. Now I'm almost out of debt. As you can probably guess, English is not a problem, and I'm open to new languages, but the first steps of entering the market, or the society, are the problematic part. I live close to Norway and am trying to get some professional contacts there, but it hasn't given me any practical prospects so far. Any advice is greatly appreciated.

    EDIT: I currently make my living off web site development and occasional consulting services, both in IT and marketing. For purely geographic reasons I deal with clients that reside in the same city where I live, pop. 350 000. Being quite local, the market requirements for web sites are simple and stable: clients need to control navigation, write articles in a word-like editor, upload illustrations, and place ad banners, all with no additional programming. As many web developers do, I use my own content management system that fits these expectations. I have also started developing a newer version of this system with better support for international environments, but I'm too distant from the real market demand in Europe to know whether I'm on the right track. Technically it's based on PHP/MySQL and uses XSLT for templating. It allows for quick website deployment and has an architectural neatness, the lack of which made me abandon similar open-source solutions (Joomla and the like). Deployment time from rasterized design proofs is normally under 6-8 working hours; I don't know how that compares to world practice.

    EDIT 2: Can anyone share what the Norwegian (Scandinavian) web solutions market currently demands?

    Read the article

< Previous Page | 540 541 542 543 544 545 546 547 548 549 550 551  | Next Page >