Search Results

Search found 25180 results on 1008 pages for 'post processing'.

Page 582 of 1008

  • SSH onto Ubuntu box using RSA keys

    - by jex
    I recently installed OpenSSH on one of my Ubuntu machines and I've been running into problems getting it to use RSA keys. I've generated an RSA key on the client (ssh-keygen) and appended the generated public key to both the /home/jex/.ssh/authorized_keys and /etc/ssh/authorized_keys files on the server. However, when I try to log in (ssh -o PreferredAuthentications=publickey jex@host -v, which forces the use of public-key authentication) I get the following output:

        debug1: Host 'pentheon.local' is known and matches the RSA host key.
        debug1: Found key in /home/jex/.ssh/known_hosts:2
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        Banner message
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug1: Next authentication method: publickey
        debug1: Offering public key: /home/jex/.ssh/id_rsa
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug1: Trying private key: /home/jex/.ssh/identity
        debug1: Trying private key: /home/jex/.ssh/id_dsa
        debug1: No more authentication methods to try.
        Permission denied (publickey,keyboard-interactive).

    I'm not entirely sure where I've gone wrong. I am willing to post my /etc/ssh/sshd_config if needed.
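    A note not from the original question: with sshd's default StrictModes setting, public-key authentication is silently refused when the home directory, ~/.ssh, or authorized_keys on the server is group- or world-writable, which matches the symptom above (the offered key is ignored and sshd falls through to the next method). A minimal sketch to check that, assuming the standard paths (run as jex on the server):

        import os, stat

        # Flag the usual suspects if they are group/world-writable.
        for path in [os.path.expanduser("~"),
                     os.path.expanduser("~/.ssh"),
                     os.path.expanduser("~/.ssh/authorized_keys")]:
            mode = os.stat(path).st_mode
            loose = bool(mode & (stat.S_IWGRP | stat.S_IWOTH))
            print(path, oct(mode & 0o777), "PROBLEM: too permissive" if loose else "ok")

    Equivalent one-liners with ls -ld and chmod work just as well; the point is only that all three paths should be writable by the owner alone.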

    Read the article

  • Drinking Our Own Champagne: Fusion Accounting Hub at Oracle

    - by Di Seghposs
    A guest post by Corey West, Senior Vice President, Oracle's Corporate Controller and Chief Accounting Officer

    There's no better story to tell than one about Oracle using its own products with blowout success. Here's how this one goes.

    As you know, Oracle has increased its share of the software market through a number of high-profile acquisitions. Legally combining companies is a very complicated process -- it can take months to complete, especially for acquisitions with offices in several countries, each with its own unique laws and regulations. It's a mission-critical and time-sensitive process to roll an acquired company's legacy systems (running vital operations, such as accounts receivable and general ledger (GL)) into the existing systems at Oracle.

    To date, we've run our primary financial ledgers in E-Business Suite R12 -- and we've successfully met the requirements of the business and closed the books on time every single quarter. But there's always room for improvement, and that comes in the form of Fusion Applications. We are now live on Fusion Accounting Hub (FAH), which is the first critical step in moving to a full Fusion Financials instance. We started with FAH so that we could design a global chart of accounts. Eventually, every transaction in every country will originate from this global chart of accounts -- it becomes the structure for managing our business more uniformly. In conjunction, we're using Oracle Hyperion Data Relationship Management (DRM) to centralize and automate governance of our global chart of accounts and related hierarchies, which will help us lower our costs and greatly reduce risk.

    Each month, we have to consolidate data from our primary general ledgers. We have been able to simplify this process considerably using FAH. We can now submit our primary ledgers running in E-Business Suite (EBS) R12 directly to FAH, eliminating the need for more than 90 redundant consolidation ledgers. Also, we can submit incrementally, so if we need to book an adjustment in a primary ledger after close, we can do so without re-opening it and re-submitting. As a result, we have earlier visibility to period-end actuals during the close.

    A goal of this implementation, and one that we successfully achieved, is that we are able to use FAH globally with no customization. This means we have the ability to fully deploy ledger sets at the consolidation level, plus we can use standard functionality for currency translation and mass allocations. We're able to use account monitoring and drill-down functionality from the consolidation level all the way through to EBS primary ledgers and sub-ledgers, which allows someone to click through a transaction appearing at the consolidation level clear through to its original source -- a significant productivity enhancement when doing research. We also see a significant improvement in reporting using the Essbase cube and Hyperion Smart View. Specifically, "the addition of an Essbase cube on top of the GL gives us tremendous versatility to automate and speed our elimination process," says Claire Sebti, Senior Director of Corporate Accounting at Oracle.

    A highlight of this story is that FAH is running in a co-existence environment. Our plan is to move to Fusion Financials in steps, starting with FAH. Next, our Oracle Financial Services Software subsidiary will move to a full Fusion Financials instance. Then we'll replace our EBS instance with Fusion Financials. This approach allows us to plan in steps, learn as we go, and not overwhelm our teams. It also reduces the risk that comes with moving the entire instance at once. Maria Smith, Vice President of Global Controller Operations, is confident about how they've positioned themselves to uptake more Fusion functionality and is eager to "continue to drive additional efficiency and cost savings."

    In this story, the happy customers are Oracle controllers, financial analysts, accounting specialists, and our management team, who get earlier access to more flexible reporting. "Fusion Accounting Hub simplifies our processes and gives us more transparency into account activity," raves Alex SanJuan, Senior Director, Record to Report Strategic Process Owner. Overall, the team has been very impressed with the usability and functionality of FAH and is pleased with the quantifiable improvements. Claire Sebti states, "Our WD5 close activities have been reduced by at least four hours of system processing time, just for the consolidation group."

    Fusion Accounting Hub is an inspiring beginning to our Fusion Financials implementation story. There's no doubt it's going to be an international bestseller!

    Corey West, Senior Vice President, Oracle's Corporate Controller and Chief Accounting Officer

    Read the article

  • Why am I getting this SVN can't move .svn/tmp/x to trunk/x error?

    - by Alex Waters
    I am trying to check out into a VirtualBox shared folder with SVN 1.7 on Ubuntu 12.04, running as a guest on a Windows 7 host. I had read that this error was a 1.6 problem and updated, but I am still receiving the error:

        svn: E000071: Can't move '/mnt/hostShare/code/www/.svn/tmp/svn-hsOG5X' to '/mnt/hostShare/code/www/trunk/statement.aspx?d=201108': Protocol error

    I found this blog post about the same error in a Mac environment, but am finding that changing the folder/file permissions does nothing. vim .svn/entries just shows the number 12 -- does this need to be changed? Thank you for any assistance! (Just another reason why I prefer git...)

    Read the article

  • Running ASP.Net MVC3 Alongside ASP.Net WebForms in the Same Project

    - by Sam Abraham
    I previously blogged on running ASP.Net MVC in an ASP.Net WebForms project. My reference at the time was a freely available PDF document by Scott Guthrie which covered the setup process in good detail.   As I am preparing references to share with our audience at my upcoming talk at the Deerfield Beach Coders Café on March 1st (http://www.fladotnet.com/Reg.aspx?EventID=514), I found a nice blog post by Scott Hanselman on running ASP.Net 4.0 WebForms alongside ASP.Net MVC 3.0 in the same project. You can access this article here.   Moreover, Scott later followed up with a blog showing how to leverage NuGet to automate the setup of ASP.Net MVC3 in an existing ASP.Net WebForms project.   One frequent question that usually comes up when discussing this side-by-side setup is the loss of the convenient Visual Studio Solution Explorer context menu which enables us to easily create controllers and views with a few mouse clicks.   A good suggestion brought up in the comments section of Scott's article presents a good workaround to this problem: manually add the MVC Visual Studio project type GUID ({E53F8FEA-EAE0-44A6-8774-FFD645390401}) in your .sln solution file, which then brings back the MVC-specific context menu functionality in Solution Explorer for the hybrid project. (Thanks, James Raden!)

    Read the article

  • #OOW 2012: Big Data and The Social Revolution

    - by Eric Bezille
    As Cognizant CSO Malcolm Frank said about the "Future of Work", and how businesses should prepare for the new generation, not only of devices and the "Internet of Things" but also of their users ("the Millennials"), who are moving from "consumers" to "prosumers": we are at a turning point today which is bringing us to the next IT architecture wave. This is no longer just about putting Big Data, social networks and Customer Experience (CxM) on top of old existing processes; it is about embracing the next curve by identifying which processes need to be improved, but also, and more importantly, which processes are obsolete and need to be gotten rid of, and which new processes need to be put in place. It is about managing both the hierarchical, structured enterprise and its social connections and influencers inside and outside of the enterprise. And this applies everywhere, even to utilities and smart grids, where it is no longer just about delivering (faster) the same old 300 reports that have grown over time with those new technologies, but about understanding what needs to be looked at, in real time, down to a handful of relevant reports with the KPIs relevant to the business.

    It is about how IT can anticipate the next wave, answer business questions, and put those capabilities in real time right in the hands of the decision makers... This is the turning curve where IT really moves from the past decade's "cost center" to "value for the business", as corporate stakeholders will be able to touch the value directly at the tips of their fingers. It is all about making data-driven strategic decisions, encompassed and enriched by ALL the data, and connected to customer/prosumer influencers. This brings stakeholders the ability to make informed decisions on questions like: "Who would be the best Olympic gold winner to represent my automotive brand?"... in a few clicks and in real time, based on social media analysis (Twitter, Facebook, Google+...) and connections linked to my enterprise data. A true example was demonstrated by Larry Ellison in real time during his keynote yesterday, where "Hardware and Software Engineered to Work Together" is not only about extreme performance but also about solutions that the business can touch, thanks to well-integrated Customer eXperience Management and social networking: bringing IT the capabilities to move to the next IT architecture wave.

    This was also illustrated today in two other sessions that I had the opportunity to attend. The first session brought the "Internet of Things" in Oil & Gas into actionable decisions thanks to Complex Event Processing capturing sensor data, with a ready-to-run IT infrastructure leveraging Exalogic for the CEP side, Exadata for the enriched datasets and Exalytics to provide the informed-decision interface to the end user. The second session showed a Real Time Decision engine in action for ACCOR hotels, with Eric Wyttynck, VP eCommerce, and his Technical Director Pascal Massenet.

    I have to close my post here, as I have to go run our practical hands-on lab, cooked up with Olivier Canonge, Christophe Pauliat and Simon Coter, illustrating in practice the Oracle Infrastructure Private Cloud announced last Sunday by Larry and developed through many examples this morning by John Fowler. John also announced Solaris 11.1 today, with a range of network innovations and virtualization at the OS level, as well as many optimizations for applications, like Oracle RAC, with the introduction of the lock manager inside the Solaris kernel. Last but not least, he introduced the Xsigo Datacenter Fabric for highly simplified networks and storage virtualization for your cloud infrastructure. Hoping you will get ready to jump on the next wave, we are here to help...

    Read the article

  • EF4 CTP5 Code First Remove Cascading Deletes

    - by Dane Morgridge
    I have been using EF4 CTP5 with code first and I really like the new code.  One issue I was having, however, is that cascading deletes are on by default.  This may come as a surprise, because when using Entity Framework with anything but code first, this is not the case.  I ran into an exception with some one-to-many relationships I had:

        Introducing FOREIGN KEY constraint 'ProjectAuthorization_UserProfile' on table 'ProjectAuthorizations' may cause cycles or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints. Could not create constraint. See previous errors.

    To get around this, you can use the fluent API and put some code in OnModelCreating:

        protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
        {
            modelBuilder.Entity<UserProfile>()
                .HasMany(u => u.ProjectAuthorizations)
                .WithRequired(a => a.UserProfile)
                .WillCascadeOnDelete(false);
        }

    This will work to remove the cascading delete, but I have to use the fluent API and it has to be done for every one-to-many relationship that causes the problem. I am personally not a fan of cascading deletes in general (for several reasons) and I'm not a huge fan of fluent APIs.  However, there is a way to do this without using the fluent API: in OnModelCreating you can remove the convention that creates the cascading deletes altogether.

        protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
        {
            modelBuilder.Conventions.Remove<OneToManyCascadeDeleteConvention>();
        }

    Thanks to Jeff Derstadt from Microsoft for the info on removing the convention altogether.  There is a way to build a custom attribute to remove it on a case-by-case basis, and I'll have a post on how to do this in the near future.

    Read the article

  • Getting codebaseHQ SVN ChangeLog data in your application

    - by saifkhan
    I deploy apps via ClickOnce. After each deployment we have to review the changes made and send out an email to the users with the changes. What I decided to do is use CodebaseHQ's API to access a project's SVN repository and display the commit notes, so users who download new updates can check what was changed or updated in an app. This saves a heck of a lot of time, especially when your apps are in beta and you are making several changes daily based on feedback. You can read up on their API here. Here is a sample of how to access the Repositories API from a Windows app:

        Public Sub GetLog()
            If String.IsNullOrEmpty(_url) Then Exit Sub
            Dim answer As String = String.Empty
            Dim myReq As HttpWebRequest = WebRequest.Create(_url)
            With myReq
                .Headers.Add("Authorization", String.Format("Basic {0}", Convert.ToBase64String(Encoding.ASCII.GetBytes("username:password"))))
                .ContentType = "application/xml"
                .Accept = "application/xml"
                .Method = "POST"
            End With
            Try
                Using response As HttpWebResponse = myReq.GetResponse()
                    Using sr As New System.IO.StreamReader(response.GetResponseStream())
                        answer = sr.ReadToEnd()
                        Dim doc As XDocument = XDocument.Parse(answer)
                        Dim commits = From commit In doc.Descendants("commit") _
                                      Select Message = commit.Element("message").Value, _
                                             AuthorName = commit.Element("author-name").Value, _
                                             AuthoredDate = DateTime.Parse(commit.Element("authored-at").Value).Date
                        grdLogData.BeginUpdate()
                        grdLogData.DataSource = commits.ToList()
                        grdLogData.EndUpdate()
                    End Using
                End Using
            Catch ex As Exception
                MsgBox(ex.Message)
            End Try
        End Sub

    Read the article

  • Missing z-axis rotation for transforming between two vectors

    - by Steve Baughman
    I'm trying to rotate a cube so that it's facing up, but am getting hung up on the final implementation details. It now reliably rotates the x and y axes to the correct side, but the z-axis is never rotating (see the before and after photos). When I'm using the code below I always get '0' for my rotationVector.z. What am I missing here?

        // Define lookAt vector
        lookAtVector = GLKVector3Make(0, 0, 1);

        // Define axes vectors
        axes[0] = GLKVector3Make(0, 0, 1);
        axes[1] = GLKVector3Make(-1, 0, 0);
        axes[2] = GLKVector3Make(0, 1, 0);
        axes[3] = GLKVector3Make(1, 0, 0);
        axes[4] = GLKVector3Make(0, -1, 0);
        axes[5] = GLKVector3Make(0, 0, -1);

        CGFloat highest_dot = -1.0;
        GLKVector3 closest_axis;
        for (int i = 0; i < 6; i++) {
            // multiply cube's axes by existing matrix
            GLKVector3 axis = GLKMatrix4MultiplyVector3(matrix, axes[i]);
            CGFloat dot = GLKVector3DotProduct(axis, lookAtVector);
            if (dot > highest_dot) {
                closest_axis = axis;
                highest_dot = dot;
            }
        }

        GLKVector3 rotationVector = GLKVector3CrossProduct(closest_axis, lookAtVector);

        // Get angle between vectors
        CGFloat angle = atan2(GLKVector3Length(rotationVector), GLKVector3DotProduct(closest_axis, lookAtVector));

        // normalize the rotation vector
        rotationVector = GLKVector3Normalize(rotationVector);

        // Create transform
        CATransform3D rotationTransform = CATransform3DMakeRotation(angle, rotationVector.x, rotationVector.y, rotationVector.z);

        // add rotation transform to existing transformation
        baseTransform = CATransform3DConcat(baseTransform, rotationTransform);

        return baseTransform;

    (Photos: before 3D rotation, after 3D rotation.) Implementation based on this post.
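    An observation that is not part of the original question, but follows directly from the code above: the z-component of rotationVector can never be nonzero, because the cross product of any vector with the lookAt vector (0, 0, 1) always has a zero third component. In LaTeX form:

        (a_x,\, a_y,\, a_z) \times (0,\, 0,\, 1) = (a_y \cdot 1 - a_z \cdot 0,\; a_z \cdot 0 - a_x \cdot 1,\; a_x \cdot 0 - a_y \cdot 0) = (a_y,\, -a_x,\, 0)

    So as long as the rotation axis is computed as a cross product with (0, 0, 1), a zero rotationVector.z is expected rather than being a bug in itself.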

    Read the article

  • HPCM 11.1.2.2.x - HPCM Standard Costing Generating >99 Calc Scripts

    - by Jane Story
    HPCM Standard Profitability calculation scripts are named based on a documented naming convention. From 11.1.2.2.x, the script name = a script suffix (1 letter) + POV identifier (3 digits) + stage order number (1 digit) + "_" + index (2 digits); please see the documentation for more information (http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_admin/apes01.html). This naming convention results in the name being 8 characters in length, i.e. the maximum number of characters permitted for calculation script names in non-Unicode Essbase BSO databases. The index in the name indicates the number of scripts per stage.

    In the vast majority of cases, the number of scripts generated per stage will be significantly less than 100 and there will therefore be no issue. However, in some cases, the number of scripts generated can exceed 99. It is unusual for an application to generate more than 99 calculation scripts for one stage. This may indicate that explicit assignments are being used extensively. An assessment should be made of the design to see if assignment rules can be used instead. Assignment rules will reduce the number of calculation script lines needed, which will reduce the requirement for such a large number of calculation scripts.

    When the number of generated scripts reaches 100, the length of the name of the 100th calculation script differs from the 99th: the name grows from 8 characters to 9 characters (e.g. A6811_100 rather than A6811_99). A name of 9 characters is not permitted in non-Unicode applications; it is "too long". When this occurs, an error will show in the hpcm.log as "Error processing calculation scripts" and "Unexpected error in business logic". Further down the log, it is possible to see that this is "Caused by: Error copying object" and "Caused by: com.essbase.api.base.EssException: Cannot put olap file object ... object name_[<calc script name> e.g. A6811_100] too long for non-unicode mode application". The error file will give the name of the calculation script which is causing the issue. In my example, this is A6811_100, and you can see it is 9 characters in length.

    It is not possible to increase the number of characters allowed in a calculation script name. However, it is possible to increase the size of each calculation script. The default for an HPCM application, set in the preferences, is 4 MB. If each calculation script is larger, fewer scripts will be generated; with fewer than 100 scripts per stage, calculation script names remain 8 characters long.

    To increase the size of the generated calculation scripts for an application, in the HPM_APPLICATION_PREFERENCE table for the application, find the row where HPM_PREFERENCE_NAME_ID=20. The default value in this row is 4194304. This can be increased, e.g. 7340032 will increase it to 7 MB. Please restart the profitability service after making the change.
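    As a sketch of that last step only (this is not from the original post: the DSN, credentials, and the value column name are assumptions to verify against your repository schema, and the repository tables should be backed up first):

        import pyodbc  # assumes an ODBC DSN pointing at the HPCM repository database

        conn = pyodbc.connect("DSN=hpcm_repo;UID=hpm;PWD=secret")  # hypothetical DSN
        cur = conn.cursor()
        # HPM_PREFERENCE_NAME_ID = 20 holds the generated calc script size;
        # 7340032 bytes = 7 MB (the default is 4194304 = 4 MB). The column name
        # HPM_PREFERENCE_VALUE is a guess -- check the actual table definition,
        # and add a filter for the specific application if required.
        cur.execute(
            "UPDATE HPM_APPLICATION_PREFERENCE "
            "SET HPM_PREFERENCE_VALUE = 7340032 "
            "WHERE HPM_PREFERENCE_NAME_ID = 20")
        conn.commit()
        conn.close()

    Remember to restart the profitability service afterwards, as noted above.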

    Read the article

  • Introduction to LinqPad Driver for StreamInsight 2.1

    - by Roman Schindlauer
    We are announcing the availability of the LinqPad driver for StreamInsight 2.1. The purpose of this blog post is to offer a quick introduction to the new features that we added to the StreamInsight LinqPad driver. We'll show you how to connect to a remote server, how to inspect the entities present on that server, how to compose on top of them, and how to manage their lifetime.

    Installing the driver

    Info on how to install the driver can be found in an earlier blog post here.

    Establishing connections

    As you click on the "Add Connection" link in the left pane you will notice that it's now possible to build the data context automatically. The new driver appears as an option in the upper list, and if you pick it you will open a connection dialog that lets you connect to a remote StreamInsight server. The connection dialog lets you specify the address of the remote server. You will notice that it's possible to pick up the binding information from the configuration file of the LinqPad application (which is normally in the same folder as LinqPad.exe and is called LinqPad.exe.config). In order for the context to be generated you need to pick an application from the server. The control is editable, hence you can create a new application if you don't want to make changes to an existing application. If you choose a new application name you will be prompted for confirmation before it gets created. Once you click OK the connection is created and you can start issuing queries against the remote server. If there's any connectivity error the connection is marked with a red X and you can see the error message informing you what went wrong (i.e., the remote server could not be reached, etc.).

    The context for remote servers

    Let's take a look at what happens after we are connected successfully. Every LinqPad query runs inside a context -- think of it as a class that wraps all the code that you're writing. If you're connecting to a live server the context will contain the following:

        - The application object itself.
        - All entities present in this application (sources, sinks, subjects and processes).

    The picture below shows a snapshot of the left pane of LinqPad after a successful connection. Every entity on the server has a different icon which will allow users to figure out its purpose. You will also notice that some entities have a string in parentheses following the name. It should be interpreted as such: the first name is the name of the property of the context class and the second name is the name of the entity as it exists on the server. Not all valid entity names are valid identifier names, so in cases where we had to make a transformation you see both. Note also that as you hover over the entities you get IntelliSense with their types -- more on that later.

    Remoting is not supported

    As you play with the entities exposed by the context you will notice that you can't read and write directly to/from them. If, for instance, you're trying to dump the content of an entity you will get an error message telling you that in the current version remoting is not supported. This is because the entity lives on the remote server and dumping its content means reading the events produced by this entity into the local process.

        ObservableSource.Dump();

    will yield the following error:

        Reading from a remote 'System.Reactive.Linq.IQbservable`1[System.Int32]' is not supported. Use the 'Microsoft.ComplexEventProcessing.Linq.RemoteProvider.Bind' method to read from the source using a remote observer.

    This basically tells you that you can call the Bind() method to direct the output of this source to a sink that has to be defined on the remote machine as well. You can't bring the results to the LinqPad window unless you write code specifically for that.

    Compose queries

    You may ask: what's the purpose of all that? After all, the same information is present in the Event Flow Debugger, so why bother with showing it in LinqPad? First of all, what gets exposed in LinqPad is not what you see in the debugger. In LinqPad we have a property on the context class for every entity that lives on the server. Because LinqPad offers IntelliSense, we in fact have much more information about the entity, and more importantly we can compose with that entity very easily. For example, let's say that this code creates an entity:

        using (var server = Server.Connect(...))
        {
            var a = server.CreateApplication("WhiteFish");
            var src = a
                .DefineObservable<int>(() => Observable.Range(0, 3))
                .Deploy("ObservableSource");

    If later we want to compose with the source, we have to fetch it and then we can bind something to

        a.GetObservable<int>("ObservableSource").Bind(...

    This means that we had to know a bunch of things about it: that it's a source, that it's an observable, that it produces a result with payload Int32, and that it's named "ObservableSource". Only the second and last bits of information are present in the debugger, by the way. As you type in the query window you see that all the entities are present, you get IntelliSense support for them, and it's much easier to make sense of what's available.

    Let's look at a scenario where composition is plausible. With the new programming model it's possible to create "cold" sources that are parameterized. There was a way to accomplish that even in the previous version by passing parameters to the adapters, but this time it's much more elegant because the expression declares what parameters are required. Say that we hover the mouse over the ThrottledSource source: we will see that its type is Func<int, int, IQbservable<int>>. This in effect means that we need to pass two int parameters before we can get a source that produces events, and the type of those events is int. In the particular case of my example I had the source produce a range of integers, and the two parameters were the start and end of the range. So we see how a developer can create a source that is not running yet. Then someone else (e.g. an administrator) can pass whatever parameters are appropriate and run the process.

    Proxy types

    Here's an interesting scenario: what if someone created a source on a server but forgot to tell you what type they used? Worse yet, they might have used an anonymous type, and even though they can refer to it by name, you can't figure out how to use that type. Let's walk through an example that shows how you can compose against types you don't have the definition of. This is how we can create a source that returns an anonymous type:

        Application.DefineObservable(() => Observable.Range(1, 10).Select(i => new { I = i })).Deploy("O1");

    Now if we refresh the connection we can see the new source named O1 appear in the list. But what's more important is that we now have a type to work with. So we can compose a query that refers to the anonymous type.

        var threshold = new StreamInsightDynamicDriver.TypeProxies.AnonymousType1_0<int>(5);
        var filter = from i in O1
                     where i > threshold
                     select i;
        filter.Deploy("O2");

    You will notice that the anonymous type defined with the statement new { I = i } can now be manipulated by a client that does not have access to it, because the LinqPad driver has generated another type in its stead, named StreamInsightDynamicDriver.TypeProxies.AnonymousType1_0. This type has all the properties and fields of the type defined on the server, except in this case we can instantiate values and use it to compose more queries. It is worth noting that the same thing works for types that are not anonymous -- the test is whether the LinqPad driver can resolve the type or not. If it's not possible, then a new type will be generated that approximates the type that exists on the server.

    Control metadata

    In addition to composing processes on top of the existing entities, we can do other useful things. We can delete them -- nothing new here, as we simply access the entities through the Entities collection of the application class. Here is where having their real name in parentheses comes in handy. There's another way to find out what's behind a property: dump its expression. The first line in the output tells us the name of the entity used to build this property in the context.

    Runtime information

    So let's create a process to see what happens. We can bind a source to a sink and run the resulting process. If you right-click on the connection you can refresh it and see the process present in the list of entities. Then you can drag the process to the query window and see that you have access to the process object in the Processes collection of the application. You can then manipulate the process (delete it, read its diagnostic view, etc.).

    Regards,
    The StreamInsight Team

    Read the article

  • links for 2010-06-01

    - by Bob Rhubart
    Venkatakrishnan J: Oracle BI EE 10.1.3.4.1 -- Do we need measures in a Fact Table? Troubleshooting from Rittman Mead's Venkatakrishnan J. (tags: oracle otn businessintelligence datawarehouse)

    Grid container support : JavaFX Composer -- An overview of how JavaFX Composer supports the grid container. (tags: oracle sun javafx)

    John Brunswick: Site Studio Mobile Example - WCM Reuse -- The example highlighted in John Brunswick's post takes advantage of dynamic conversion capabilities in Oracle UCM that allow site content to be created and updated via MS Office documents. (tags: oracle otn enterprise2.0)

    @glassfish: GlassFish 3 in the EC2 Cloud powering Dutch and Belgian community polls -- "The infrastructure is Amazon's Elastic Cloud Computing (EC2) environment because of the dynamic provisioning (elasticity) required by such an online service. Requests are handled directly by the grizzly layer of GlassFish with no extra front-end HTTP layer and shows great performance and scalability." -- The Aquarium (tags: oracle java sun glassfish cloud)

    James Morle: Flash Storage Will Be Cheap: The End of the World is Nigh -- "We now need technologies that look more like Oracle Exadata v2, with low-latency RDMA interfaces directly into the Operating System/Database. However, they need to easily and natively support other types of storage (unstructured data such as files, VMware datastores and so forth). The Exadata architecture lends itself well to changes in this area in both hardware trends and access protocols." -- James Morle (tags: oracle otn exadata database architecture virtualization)

    Java / Oracle SOA blog: HTTP binding in Soa Suite 11g PS2 (tags: ping.fm)

    Confessions of a Software Developer: Some Tips for Installing Oracle BPM 11g on Windows XP (tags: ping.fm)

    SOA and Java using Oracle technology: Book review: Oracle Coherence 3.5: Create internet scale applications using Oracle's high-performance data grid (tags: ping.fm)

    Read the article

  • A good tool for browser automation/client-side Web scripting

    - by hardmath
    I'm interested in adopting a tool/scripting language to automate some daily tasks connected with fighting forum spammers. A brief overview of these tasks: analyze new registrations and posts on a phpBB forum, and delete or deactivate spammers, using a website/community that collects such spam reports. Typically such automation is integrated into the phpBB installation itself, which certainly has its advantages. My approach has the advantage of independent operation, etc.

    One way to think about this is in terms of browser automation. I've used iOpus iMacros for Firefox (the free version) in the past to respond to individual spammers, but current attacks are highly distributed. My "logic" for pigeonholing spammers vs. non-spammers seems beyond the easy reach of the free version of iMacros. From a more technical perspective, one can think about dispensing with the browser altogether and programming GET/POST requests directed to my forum and other Web-based resources.

    I'm familiar with some scripting languages like Ruby and Lua, but I could be persuaded that a compiled application is better suited for these tasks. However, in my experience the dynamic flexibility of interpreted environments is very useful for prototyping and debugging the application logic. So I'm leaning in the direction of scripting languages. Among browsers I favor Firefox and Chrome. I use both Windows and Linux platforms, and if the tool can adapt to an Android platform, it would make a neat demonstration of skills, yes? Thanks in advance for your suggestions!
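    To make the "dispense with the browser" idea concrete, here is a minimal sketch in Python (a language the question does not mention, chosen only for illustration; the URL and form field names are hypothetical placeholders, and a real phpBB login also requires the hidden form tokens from the login page):

        import requests  # third-party: pip install requests

        session = requests.Session()  # keeps cookies across requests
        # Hypothetical login endpoint and fields; inspect the real form first.
        resp = session.post(
            "https://forum.example.com/ucp.php?mode=login",
            data={"username": "moderator", "password": "secret", "login": "Login"},
        )
        resp.raise_for_status()
        # The session now carries the auth cookie, so moderation pages
        # can be fetched and POSTed to directly.
        print(session.get("https://forum.example.com/memberlist.php").status_code)

    Any language with a decent HTTP client (Ruby's net/http, Lua with LuaSocket) supports the same pattern.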

    Read the article

  • SSIS Dashboard v0.4

    - by Davide Mauri
    Following the post on the SSISDB script on Gist, I've been working on an HTML5 SSIS Dashboard, in order to have a nice-looking, user-friendly and, most of all, useful SSIS dashboard. Since this is a "spare-time" project, I've decided to develop it using Python, since it's THE data language (R aside): it's beautiful and powerful, well established and well documented, and has a rich ecosystem around it. Plus, it has full support in Visual Studio through the amazing Python Tools for Visual Studio plugin. I decided also to use Flask, a very good micro-framework for creating websites, and the SB Admin 2.0 Bootstrap admin template, since I'm anything but a Web designer. The result is here: https://github.com/yorek/ssis-dashboard and I can say I'm pretty satisfied with the work done so far (I've worked on it for probably less than 24 hours). Though there are some features I'd like to add in the future (historical execution times, some charts, connection with AzureML to do prediction of expected execution times), it's already usable. Of course, I've tested it only on my development machine, so check twice before putting it in production. But given the fact that there is virtually no installation needed (you only need to install Python), and that all queries are based on standard SSISDB objects, I expect no big problems. If you want to test, contribute and/or give feedback, please feel free to do it... I would really love to see this little project become a community project! Enjoy!
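    For flavor, here is a minimal sketch of the Flask-plus-SSISDB idea (this is not the project's actual code; the connection string is an assumption for your environment, and catalog.executions is one of the standard SSISDB catalog views):

        import pyodbc                       # assumed: pip install pyodbc
        from flask import Flask, jsonify

        app = Flask(__name__)
        # Hypothetical connection string; adjust driver/server for your setup.
        CONN = "DRIVER={SQL Server};SERVER=localhost;DATABASE=SSISDB;Trusted_Connection=yes"

        @app.route("/executions")
        def executions():
            # catalog.executions is a standard SSISDB view.
            with pyodbc.connect(CONN) as cn:
                rows = cn.cursor().execute(
                    "SELECT TOP 20 execution_id, package_name, status "
                    "FROM catalog.executions ORDER BY execution_id DESC").fetchall()
            return jsonify(executions=[{"id": r.execution_id,
                                        "package": r.package_name,
                                        "status": r.status} for r in rows])

        if __name__ == "__main__":
            app.run(debug=True)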

    Read the article

  • What to look for in a good cheap color laser printer?

    - by torbengb
    My old color inkjet is giving up, and I'm considering a laser to replace it. There are several good questions about color laser printers, but none of them summarizes the pros and cons. So here goes: I am looking to buy a color printer for home use, mostly for photos (at least medium quality) and also for low-volume b/w text. Duplex would be neat but not a must. One aspect per answer, please: What aspects should I consider, what should I look for, and what should I avoid in a home color laser printer? I'll make this a community wiki because there won't be one single definitive answer. I'll post a few ideas of my own, but I'm hoping to get many useful insights.

    Read the article

  • LASTDATE dates arguments and upcoming events #dax #tabular #powerpivot

    - by Marco Russo (SQLBI)
    Recently I had to write a DAX formula containing a LASTDATE within the logical condition of a FILTER. I found that its behavior was not the one I expected, and I investigated further. In the end, I wrote my findings in this article on SQLBI, which can be applied to any Time Intelligence function with a <dates> argument. The key point is that when you write

        LASTDATE( table[column] )

    in reality you obtain something like

        LASTDATE( CALCULATETABLE( VALUES( table[column] ) ) )

    which converts an existing row context into a filter context. Thus, if you have something like

        FILTER( table, table[column] = LASTDATE( table[column] ) )

    the FILTER will return all the rows of table, whereas you probably want to use

        FILTER( table, table[column] = LASTDATE( VALUES( table[column] ) ) )

    so that the existing filter context before executing FILTER is used to get the result from VALUES( table[column] ), avoiding the automatic expansion that would include a CALCULATETABLE hiding the existing filter context. If after reading the article you want more insights, read Jeffrey Wang's post here.

    These days I'm speaking at SQLRally Nordic 2012 in Copenhagen, and I will be in Cologne (Germany) next week for an SSAS Tabular Workshop, whereas Alberto will teach the same workshop in Amsterdam one week later. Both workshops still have seats available, and the Amsterdam one is still in early-bird discount until October 3rd! Then, in November, I expect to meet many blog readers at PASS Summit 2012 in Seattle, and I hope to find the time to write more articles on interesting things in Tabular and PowerPivot. Stay tuned!

    Read the article

  • How can I rank teams based off of head to head wins/losses

    - by TMP
    I'm trying to write an algorithm (specifically in Ruby) that will rank teams based on their record against each other. If team A and team B have won the same number of games against each other, then it goes down to point differentials. Here's an example:

        A beats B two times
        B beats C one time
        A beats D three times
        C beats D two times
        D beats C one time
        B beats A one time

    Which reduces to:

        A[B] = 2
        B[C] = 1
        A[D] = 3
        C[D] = 2
        D[C] = 1
        B[A] = 1

    Which further reduces to:

        A[B] = 1
        B[C] = 1
        A[D] = 3
        C[D] = 1
        D[C] = -1
        B[A] = -1

    Which is about as far as I've got. I think the result of this specific algorithm would be: A, B, C, D. But I'm stuck on how to transition from my nested hash-like structure to the results. My pseudo-code, written out as runnable Ruby, is as follows (I can post the rest of my Ruby code too if someone wants):

        hash = Hash.new { |h, k| h[k] = Hash.new(0) }
        games.each { |g| hash[g.winner][g.loser] += 1 }      # first reduction

        hash2 = Hash.new { |h, k| h[k] = Hash.new(0) }       # clone of hash
        hash.each { |w, losers| losers.each { |l, n| hash2[w][l] = n } }
        hash.each do |winner, losers|                        # second reduction
          losers.each do |loser, losses|
            hash2[loser][winner] -= losses
          end
        end

    Feel free to ask me questions, or edit this to be more clear; I'm not sure how to put it in a very eloquent way. Thanks!
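    One hedged sketch of the missing last step (this is an assumption about the intended ranking, not code from the question): total each team's net record from the second reduction, sort descending, and fall back to the head-to-head entry for ties.

        scores = {}
        hash2.each { |team, opp| scores[team] = opp.values.inject(0) { |s, v| s + v } }
        ranking = scores.keys.sort do |a, b|
          primary = scores[b] <=> scores[a]                        # higher total first
          primary.zero? ? (hash2[b][a] <=> hash2[a][b]) : primary  # head-to-head tie-break
        end
        # => ["A", "B", "C", "D"] for the sample data above

    Note the tie-break here uses head-to-head net wins; the "point differentials" mentioned in the question would need per-game scores, which the data structure shown does not capture.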

    Read the article

  • Guide.BeginShowMessageBox wrapper

    - by Daniel Moth
    While coding for Windows Phone 7 using Silverlight, I was really disappointed with the built-in MessageBox class, so I found an alternative. My disappointment was the fact that:

        - Display of the messagebox causes the phone to vibrate (!)
        - Display of the messagebox causes the phone to make an annoying sound.
        - You can only have "ok" and "cancel" buttons (no other button captions).

    I was using the messagebox something like this:

        // Produces unwanted sound and vibration.
        // ...plus no customization of button captions.
        if (MessageBox.Show("my message", "my caption", MessageBoxButton.OKCancel) == MessageBoxResult.OK)
        {
            // Do something
            Debug.WriteLine("OK");
        }

    ...and wanted to make minimal changes throughout my code to change it to this:

        // no sound or vibration
        // ...plus bonus of customizing button captions
        if (MyMessageBox.Show("my message", "my caption", "ok, got it", "that sucks") == MyMessageBoxResult.Button1)
        {
            // Do something
            Debug.WriteLine("OK");
        }

    It turns out there is a much more powerful class in the XNA framework that delivered on my requirements (and offers even more features that I didn't need, like choice of sounds and not blocking the caller): Guide.BeginShowMessageBox. You can use it simply by adding an assembly reference to Microsoft.Xna.Framework.GamerServices. I wrote a little wrapper for my needs and you can find it here (ready to enhance with your needs): MyMessageBox.cs.txt. Comments about this post welcome at the original blog.

    Read the article

  • Retrieve the full ASP.NET Form Buffer as a String

    - by Rick Strahl
    Did it again today: for logging purposes I needed to capture the full Request.Form data as a string, and while it's pretty easy to retrieve the buffer, it always takes me a few minutes to remember how to do it. So I finally wrote a small helper function to accomplish this, since it comes up rather frequently, especially in debugging scenarios or in the immediate window. Here's the quick function to get the form buffer as a string:

        /// <summary>
        /// Returns the content of the POST buffer as string
        /// </summary>
        /// <returns></returns>
        public static string FormBufferToString()
        {
            HttpRequest Request = HttpContext.Current.Request;
            if (Request.TotalBytes > 0)
                return Encoding.Default.GetString(Request.BinaryRead(Request.TotalBytes));
            return string.Empty;
        }

    Clearly a simple task, but handy to have in your library for reuse. You probably don't want to call this if you have a massive inbound form buffer, or if the data you're retrieving is binary. It's probably a good idea to check the inbound content type before calling this function, with something like this:

        var formBuffer = string.Empty;
        if (Request.ContentType.StartsWith("text/") ||
            Request.ContentType == "application/x-www-form-urlencoded")
        {
            formBuffer = FormBufferToString();
        }

    to ensure you're working only on content types you can actually view as text. Now if I can only remember the name of this function in my library -- it's part of the static WebUtils class in the West Wind Web Toolkit if you want to check out a number of other useful Web helper functions.

    © Rick Strahl, West Wind Technologies, 2005-2011
    Posted in ASP.NET

    Read the article

  • SQL SERVER – What is the Maximum Relational Database Size Supported by Single Instance?

    - by Pinal Dave
    I often get asked the following question: "How much data can SQL Server handle?" Every single time I get this question, I ask one back: "How much data can your storage system handle?" The reason I ask is that, in reality, for enterprise systems the limitation of storage is no longer an issue; as a matter of fact, most databases nowadays are limited by the size of the storage system. SQL Server is an enterprise system and a very mature product. If you still want to know the actual limit, here is the answer: SQL Server 2008 R2, 2012 and 2014 have a maximum capacity of 524 PB (petabytes) in the Enterprise, BI and Standard editions. SQL Server Express has a limitation of 10 GB due to its nature. I guess, now when you look at my question, it will make sense that it all depends on the size of your storage system. I personally believe that at this point in time 524 PB is quite a huge amount of data, but we never know: when we read this blog post 10 years from now, we may all wonder what I was actually thinking. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Links for Getting Started with PowerShell for Office 365 and Exchange Online

    - by Brian Jackett
    This past week I worked with some customers who were getting started with using PowerShell against Exchange Online as part of their new Office 365 solution.  As you may know, Exchange is not my primary focus area, but since these customers' needs centered around PowerShell, I thought this would be a good opportunity to learn more.  What soon became apparent to me were a few things:

        - The output / objects returned from Exchange Online vs. on-premises commandlets sometimes differ (mainly due to Exchange Online output needing to be serialized across the wire).
        - Some of the community scripts posted on the TechNet Script Center or the PoSH Code Repository that work on-premises won't work against Exchange Online due to the above.
        - I went to multiple resources to get an introduction to using the Exchange Online commandlets.

    In light of the last item, I would like to share some resources I gathered for getting started with the Exchange Online commandlets.  I will address the first two items in a follow-up post that shows one sample script that I helped a customer fix.

    Links

        Using PowerShell with Office365
        http://blah.winsmarts.com/2011-4-Using_PowerShell_with_Office365.aspx

        Administering Microsoft Office 365 using Windows PowerShell
        http://blog.powershell.no/2011/05/09/administering-microsoft-office-365-using-windows-powershell/

        Reference to Available PowerShell Cmdlets in Exchange Online
        http://help.outlook.com/en-us/140/dd575549.aspx

        Windows PowerShell cmdlets for Office 365
        http://onlinehelp.microsoft.com/en-us/office365-enterprises/hh125002.aspx

        Role Based Access Control in Exchange Online
        http://help.outlook.com/en-us/140/dd207274.aspx

        Exchange Online and RBAC
        http://blogs.technet.com/b/ilvancri/archive/2011/05/16/exchange-online-office365-and-rbac.aspx

    Conclusion

    Office 365 is being integrated into more and more customers' environments.  While your PowerShell skills can still be used to manage certain portions of Office 365 (Exchange Online as of the time of this writing), there are a few differences in how data is passed back and forth.  Hopefully the links above will get you started on scripting against cloud-based services.

    -Frog Out

    Read the article

  • Reflector Pro has now been released!

    - by CliveT
    After moving into the .NET division in May, and having a great time working on Reflector, I'm pleased to say that the results of that work are now available: Reflector Pro has now been released! The old Reflector, as you know and love it, is still available free of charge, and as part of this project we've fixed a number of bugs in the decompilation that had been around for a long time. The Pro version comes as an add-in for Visual Studio; this offers dynamic decompilation and generation of pdb files, which allow you to step into the decompiled code. Alex has some good pictures of this functionality in his beta post from around a month ago. Thanks to the other guys who've worked on this for taking me along for the ride: Alex, Andrew, Bart and Jason. Stephen did some great usability work, Chris Alford did some great technical authoring, and Laila handled the launch publicity. Like all projects, there's always more I'd like to have done, but what we have looks like a pretty powerful addition to the developer's set of tools to me. Please try it and give us feedback on the forum.

    Read the article

  • How to safely backup MySQL using VSS-based backup solutions

    - by Rhyven
    One of my clients is running MySQL on a Windows Server 2008 system. Their regular backups are performed using StorageCraft's ShadowCopy, which uses the VSS service to perform backups of open files. Some investigation indicates that MySQL is not entirely VSS-aware, and that the tables need to be locked prior to the shadow operation, then unlocked afterwards. There is a post at http://forum.storagecraft.com/Community/forums/p/548/2702.aspx which indicates the steps that need to be performed; however, the user had some difficulty performing them and no follow-up solution was ever posted. Specifically, they succeeded in writing a batch file to lock the database; however, once the batch file returns from MySQL it drops the connection and thus relinquishes the lock.

    I'm looking for a method by which I can send the MySQL command FLUSH TABLES WITH READ LOCK, then perform the backup, then send UNLOCK TABLES when the backup is complete. Alternatively, I can exclude the MySQL data storage folder from the backup and schedule a mysqldump backup into a folder that will then be backed up by VSS. Can I have some recommendations please?
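    One way to hold the lock across the backup (a hedged sketch, not from the question: it assumes Python with the MySQL Connector/Python package is acceptable on the server, and the backup command is a placeholder) is to keep a single connection open for the whole operation:

        import subprocess
        import mysql.connector  # assumed: pip install mysql-connector-python

        conn = mysql.connector.connect(user="backup", password="secret", host="localhost")
        cur = conn.cursor()
        # The lock lasts exactly as long as this connection stays open.
        cur.execute("FLUSH TABLES WITH READ LOCK")
        try:
            # Placeholder: trigger the VSS snapshot / ShadowCopy job here.
            subprocess.run(["C:\\scripts\\run_snapshot.cmd"], check=True)
        finally:
            cur.execute("UNLOCK TABLES")
            conn.close()

    The same pattern works from any language or tool that can keep one session open; the batch-file attempt failed only because the mysql client exited, closing the session and releasing the lock.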

    Read the article

  • apt-get upgrade gives "403 forbidden" error

    - by 3l4ng
    I'm running Ubuntu 13.04, 64-bit. sudo apt-get update works fine, but when I run sudo apt-get upgrade I get these errors:

        Err http://archive.ubuntu.com/ubuntu/ raring-updates/main python3.3-minimal amd64 3.3.1-1ubuntu5.2  403 Forbidden
        Err http://ppa.launchpad.net/otto-kesselgulasch/gimp/ubuntu/ raring/main gimp amd64 2.8.6-0raring1~ppa  403 Forbidden
        Err http://archive.ubuntu.com/ubuntu/ raring-security/main python3.3-minimal amd64 3.3.1-1ubuntu5.2  403 Forbidden
        Err http://ppa.launchpad.net/otto-kesselgulasch/gimp/ubuntu/ raring/main gimp-help-en all 1:2.8-0raring16~ppa  403 Forbidden
        Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/p/python3.3/python3.3-minimal_3.3.1-1ubuntu5.2_amd64.deb  403 Forbidden
        Failed to fetch http://ppa.launchpad.net/otto-kesselgulasch/gimp/ubuntu/pool/main/g/gimp/gimp_2.8.6-0raring1~ppa_amd64.deb  403 Forbidden
        Failed to fetch http://ppa.launchpad.net/otto-kesselgulasch/gimp/ubuntu/pool/main/g/gimp-help/gimp-help-en_2.8-0raring16~ppa_all.deb  403 Forbidden
        E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

    Running sudo apt-get upgrade --fix-missing installs some updates, but the above errors still persist when I run apt-get upgrade again. The software update app shows the error: https://www.dropbox.com/s/2cr450557hmahzz/software_update.jpg and selecting continue shows: https://www.dropbox.com/s/l7u32sxyfbxxeeg/soft_upd2.jpg (sorry for the links, I don't have enough rep to post images).

    I am behind a proxy, but apt-get update and web browsing work without issues. I also don't believe a server being down is causing this, as the problem has been there for over a month. Any ideas on how to fix this?
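    One quick way to narrow this down (a hedged diagnostic sketch, not from the question; the proxy address is a placeholder): request one of the failing .deb URLs through the same proxy apt uses. A 403 from the proxy on .deb files, but not on index files, would point at a proxy filtering rule rather than the archives.

        import requests  # third-party: pip install requests

        proxies = {"http": "http://proxy.example.com:8080"}  # your proxy here
        url = ("http://archive.ubuntu.com/ubuntu/pool/main/p/python3.3/"
               "python3.3-minimal_3.3.1-1ubuntu5.2_amd64.deb")
        resp = requests.head(url, proxies=proxies, allow_redirects=True)
        print(resp.status_code, resp.reason)

    If the same request succeeds with no proxy (or from another network), the proxy or a firewall rule on it is the likely culprit rather than the archive or the PPA.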

    Read the article

  • How do I convince my team that a requirements specification is unnecessary if we adopt user-stories?

    - by Nupul
    We are planning to adopt user stories to capture stakeholder 'intent' in a lightweight fashion rather than a heavy SRS (software requirements specification). However, it seems that though they understand the value of stories, there is still a desire to 'convert' the stories into SRS-like language with all the attributes, priorities, inputs, outputs, sources, destinations, etc. User stories 'eliminate' the need for a formal SRS-like artifact to begin with, so what's the point in having an SRS? How should I convince my team (who are all very qualified CS folks, by the way, both by education and practice) that the SRS would be 'eliminated' if we adopted user stories for capturing the functional requirements of the system? (NFRs etc. can be captured too, but that's not the intent of the question.)

    So here's my 'work-flow' argument: capture initial requirements as user stories and later elaborate them into use cases (which are required to be documented at a low level, i.e. describing interactions with the UI prototypes/mockups, and are a deliverable post-deployment). Thus we go from user stories to use cases, rather than from user stories to SRS to use cases. How are you all currently capturing user stories at your workplace (if at all), and how do you suggest I 'make a case' for the absence of an SRS in the presence of user stories?

    Read the article

  • Welcome to the Oracle FedApps blog

    - by jeffrey.waterman
    Congratulations, you have stumbled upon Oracle's newest blog: The Federal Applications Blog. Periodically I plan to provide some insight into how Oracle's application solutions are being applied, or how they can be applied, within the Federal Government. If you are a user of, or just interested in, Oracle's applications in the Federal space and have questions/topics you would like to see addressed in this blog, please post a comment. So bear with me as I take a bit of time to refine the content, look and feel of this blog.

        http://www.oracle.com/us/industries/public-sector/038044.htm
        http://www.oracle.com/us/industries/public-sector/038046.htm

    -- JMW

    Read the article
