Search Results

Search found 19690 results on 788 pages for 'result partitioning'.

Page 81/788 | < Previous Page | 77 78 79 80 81 82 83 84 85 86 87 88  | Next Page >

  • HP ProLiant DL980-Oracle TPC-C Benchmark spat

    - by jchang
    The Register reported a spat between HP and Oracle over the TPC-C benchmark. Per the article, HP submitted a TPC-C result of 3,388,535 tpm-C for their ProLiant DL980 G7 (8 Xeon X7560 processors), at a cost of $0.63 per tpm-C. Oracle has refused permission to publish. Late last year (2010) Oracle published a result of 30M tpm-C for a 108-processor (socket) SPARC cluster ($30M complete system cost). Oracle is now comparing this to the HP Superdome result from 2007 of 4M tpm-C at $2.93 per tpm-C, calling...(read more)

    Read the article

  • Problem with script substitution when running script

    - by tucaz
    Hi! I'm new to Linux, so this probably has an easy fix, but I cannot see it. I have a script, downloaded from official sources, that is used to install additional tools for F#, but it gives me a syntax error when I run it. I tried to replace ( and ) with { and }, but that eventually led me to another error, so I think this is not the problem, since the script works for everybody else. Some articles I read say that my bash version may not be the right one. I'm using Ubuntu 10.10, and here is the error:

    install-bonus.sh: 28: Syntax error: "(" unexpected (expecting "}")

    And these are lines 27, 28 and 29:

    {
    declare -a DIRS=("${!3}")
    FILE=$2

    And the full script:

    #! /bin/sh -e
    PREFIX=/usr
    BIN=$PREFIX/bin
    MAN=$PREFIX/share/man/man1/

    die() {
        echo "$1" >&2
        echo "Installation aborted." >&2
        exit 1
    }

    echo "This script will install additional material for F# including"
    echo "man pages, fsharpc and fsharpi scripts and Gtk# support for F#"
    echo "Interactive (root access needed)"
    echo ""

    # ------------------------------------------------------------------------------
    # Utility function that searches specified directories for a specified file
    # and if the file is not found, it asks user to provide a directory
    RESULT=""
    searchpaths() {
        declare -a DIRS=("${!3}")
        FILE=$2
        DIR=${DIRS[0]}
        for TRYDIR in ${DIRS[@]}
        do
            if [ -f $TRYDIR/$FILE ]
            then
                DIR=$TRYDIR
            fi
        done
        while [ ! -f $DIR/$FILE ]
        do
            echo "File '$FILE' was not found in any of ${DIRS[@]}. Please enter $1 installation directory:"
            read DIR
        done
        RESULT=$DIR
    }

    # ------------------------------------------------------------------------------
    # Locate F# installation directory - this is needed, because we want to
    # add environment variable with it, generate 'fsharpc' and 'fsharpi' and also
    # copy load-gtk.fsx to that directory
    # ------------------------------------------------------------------------------
    PATHS=( $1 /usr/lib/fsharp /usr/lib/shared/fsharp )
    searchpaths "F# installation" FSharp.Core.dll PATHS[@]
    FSHARPDIR=$RESULT
    echo "Successfully found F# installation directory."

    # ------------------------------------------------------------------------------
    # Check that we have everything we need
    # ------------------------------------------------------------------------------
    [ $(id -u) -eq 0 ] || die "Please run the script as root."
    which mono > /dev/null || die "mono not found in PATH."

    # ------------------------------------------------------------------------------
    # Make sure that all additional assemblies are in GAC
    # ------------------------------------------------------------------------------
    echo "Installing additional F# assemblies to the GAC"
    gacutil -i $FSHARPDIR/FSharp.Build.dll
    gacutil -i $FSHARPDIR/FSharp.Compiler.dll
    gacutil -i $FSHARPDIR/FSharp.Compiler.Interactive.Settings.dll
    gacutil -i $FSHARPDIR/FSharp.Compiler.Server.Shared.dll

    # ------------------------------------------------------------------------------
    # Install additional files
    # ------------------------------------------------------------------------------

    # Install man pages
    echo "Installing additional F# commands, scripts and man pages"
    mkdir -p $MAN
    cp *.1 $MAN

    # Export the FSHARP_COMPILER_BIN environment variable
    if [[ ! "$OSTYPE" =~ "darwin" ]]; then
        echo "export FSHARP_COMPILER_BIN=$FSHARPDIR" > fsharp.sh
        mv fsharp.sh /etc/profile.d/
    fi

    # Generate 'load-gtk.fsx' script for F# Interactive (ask user if we cannot find binaries)
    PATHS=( /usr/lib/mono/gtk-sharp-2.0 /usr/lib/cli/gtk-sharp-2.0 /Library/Frameworks/Mono.framework/Versions/2.8/lib/mono/gtk-sharp-2.0 )
    searchpaths "Gtk#" gtk-sharp.dll PATHS[@]
    GTKDIR=$RESULT
    echo "Successfully found Gtk# root directory."

    PATHS=( /usr/lib/mono/gtk-sharp-2.0 /usr/lib/cli/glib-sharp-2.0 /Library/Frameworks/Mono.framework/Versions/2.8/lib/mono/gtk-sharp-2.0 )
    searchpaths "Glib" glib-sharp.dll PATHS[@]
    GLIBDIR=$RESULT
    echo "Successfully found Glib# root directory."

    PATHS=( /usr/lib/mono/gtk-sharp-2.0 /usr/lib/cli/atk-sharp-2.0 /Library/Frameworks/Mono.framework/Versions/2.8/lib/mono/gtk-sharp-2.0 )
    searchpaths "Atk#" atk-sharp.dll PATHS[@]
    ATKDIR=$RESULT
    echo "Successfully found Atk# root directory."

    PATHS=( /usr/lib/mono/gtk-sharp-2.0 /usr/lib/cli/gdk-sharp-2.0 /Library/Frameworks/Mono.framework/Versions/2.8/lib/mono/gtk-sharp-2.0 )
    searchpaths "Gdk#" gdk-sharp.dll PATHS[@]
    GDKDIR=$RESULT
    echo "Successfully found Gdk# root directory."

    cp bonus/load-gtk.fsx load-gtk1.fsx
    sed "s,INSERTGTKPATH,$GTKDIR,g" load-gtk1.fsx > load-gtk2.fsx
    sed "s,INSERTGDKPATH,$GDKDIR,g" load-gtk2.fsx > load-gtk3.fsx
    sed "s,INSERTATKPATH,$ATKDIR,g" load-gtk3.fsx > load-gtk4.fsx
    sed "s,INSERTGLIBPATH,$GLIBDIR,g" load-gtk4.fsx > load-gtk.fsx
    rm load-gtk1.fsx
    rm load-gtk2.fsx
    rm load-gtk3.fsx
    rm load-gtk4.fsx
    mv load-gtk.fsx $FSHARPDIR/load-gtk.fsx

    # Generate 'fsharpc' and 'fsharpi' scripts (using the F# path)
    # 'fsharpi' automatically searches F# root directory (e.g. load-gtk)
    echo "#!/bin/sh" > fsharpc
    echo "exec mono $FSHARPDIR/fsc.exe --resident \"\$@\"" >> fsharpc
    chmod 755 fsharpc
    echo "#!/bin/sh" > fsharpi
    echo "exec mono $FSHARPDIR/fsi.exe -I:\"$FSHARPDIR\" \"\$@\"" >> fsharpi
    chmod 755 fsharpi
    mv fsharpc $BIN/fsharpc
    mv fsharpi $BIN/fsharpi

    Thanks a lot!

    Read the article

  • Creating ASP.NET MVC Negotiated Content Results

    - by Rick Strahl
    In a recent ASP.NET MVC application I’m involved with, we had a late-in-the-process request to handle Content Negotiation: returning output based on the HTTP Accept header of the incoming HTTP request. This is standard behavior in ASP.NET Web API, but ASP.NET MVC doesn’t support this functionality directly out of the box. Another reason this came up in discussion is last week’s announcements of ASP.NET vNext, which seem to indicate that ASP.NET Web API is not going to be ported to the cloud version of vNext, but rather be replaced by a combined version of MVC and Web API. While it’s not clear what new API features will show up in this new framework, it’s pretty clear that the ASP.NET MVC style syntax will be the new standard for the new combined HTTP processing framework.

    Why Negotiated Content? Content negotiation is one of the key features of Web API, even though it’s such a relatively simple thing. But it’s also something that’s missing in MVC, and once you get used to automatically having your content returned based on Accept headers, it’s hard to go back to manually creating separate methods for different output types, as you’ve had to with Microsoft server technologies all along (yes, yes, I know other frameworks – including my own – have done this for years, but for in-the-box features this is relatively new from Web API). As a quick review, Accept header content negotiation works off the request’s HTTP Accept header: POST http://localhost/mydailydosha/Editable/NegotiateContent HTTP/1.1 Content-Type: application/json Accept: application/json Host: localhost Content-Length: 76 Pragma: no-cache { ElementId: "header", PageName: "TestPage", Text: "This is a nice header" } If I make this request I would expect to get back a JSON result based on my application/json Accept header. To request XML I’d just change the Accept header: Accept: text/xml and now I’d expect the response to come back as XML. Now this only works with media types that the server can process. In my case here I need to handle JSON, XML, HTML (using Views) and plain text. HTML results might need more than just a data return – you also probably need to specify a View to render the data into, either by specifying the view explicitly or by using some sort of convention that can automatically locate a view to match. Today ASP.NET MVC doesn’t support this sort of automatic content switching out of the box. Unfortunately, in my application scenario we have an application that started out primarily with an AJAX backend that was implemented with JSON only. So there are lots of JSON results like this: [Route("Customers")] public ActionResult GetCustomers() { return Json(repo.GetCustomers(), JsonRequestBehavior.AllowGet); } These work fine, but they are of course JSON specific. Then a couple of weeks ago, a requirement came in that an old desktop application needs to also consume this API, and it has to use XML to do it because there’s no JSON parser available for it. Ooops – stuck with JSON in this case. While it would have been easy to add XML specific methods, I figured it’s easier to add basic content negotiation. And that’s what I show in this post.

    Missteps – IResultFilter, IActionFilter My first attempt at this was to use IResultFilter or IActionFilter, which look like they would be ideal to modify result content after it’s been generated, using OnResultExecuted() or OnActionExecuted().
    Filters are great because they can look globally at all controller methods or individual methods that are marked up with the Filter’s attribute. But it turns out these filters don’t work for raw POCO result values from Action methods. What we wanted to do for API calls is get back to using plain .NET types as results rather than result actions. That is, you write a method that doesn’t return an ActionResult, but a standard .NET type like this: public Customer UpdateCustomer(Customer cust) { … do stuff to customer :-) return cust; } Unfortunately both OnResultExecuted and OnActionExecuted receive an MVC ContentResult instance from the POCO object. MVC basically takes any non-ActionResult return value and turns it into a ContentResult by converting the value using .ToString(). Ugh. The ContentResult itself doesn’t contain the original value, which is lost AFAIK with no way to retrieve it. So there’s no way to access the raw customer object in the example above. Bummer.

    Creating a NegotiatedResult This leaves mucking around with custom ActionResults. ActionResults are MVC’s standard way to return action method results – you basically specify that you would like to render your result in a specific format. Common ActionResults are ViewResults (ie. View(vn, model)), JsonResult, RedirectResult etc. They work, are fairly effective, and work fairly well for testing, and they’re the ‘standard’ interface to return results from actions. The problem with this is mainly that you’re explicitly saying that you want a specific result output type. This works well for many things, but sometimes you do want your result to be negotiated. My first crack at this solution is to create a simple ActionResult subclass that looks at the Accept header and, based on that, writes the output. I need to support JSON and XML content and HTML as well as text – so effectively 4 media types: application/json, text/xml, text/html and text/plain. Everything else is passed through as ContentResult – which effectively returns whatever .ToString() returns. Here’s what the NegotiatedResult usage looks like: public ActionResult GetCustomers() { return new NegotiatedResult(repo.GetCustomers()); } public ActionResult GetCustomer(int id) { return new NegotiatedResult("Show", repo.GetCustomer(id)); } There are two overloads of this method – one that takes just the raw result value and a second version that accepts an optional view name. The second version returns the Razor view specified only if text/html is requested – otherwise the raw data is returned. This is useful in applications where you have an HTML front end that can also double as an API interface endpoint that’s using the same model data you send to the View. For the application I mentioned above this was another actual use-case we needed to address, so this was a welcome side effect of creating a custom ActionResult. There’s also an extension method that directly attaches a Negotiated() method to the controller using the same syntax: public ActionResult GetCustomers() { return this.Negotiated(repo.GetCustomers()); } public ActionResult GetCustomer(int id) { return this.Negotiated("Show", repo.GetCustomer(id)); } Using either of these mechanisms now allows you to return JSON, XML, HTML or plain text results depending on the Accept header sent. Send application/json and you get just the Customer JSON data. Ditto for text/xml and XML data.
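
    As a quick client-side check of this behavior, here is a minimal sketch (the URL and route below are hypothetical placeholders, not part of the original article) that issues the same GET twice with different Accept headers and prints the Content-Type of each response:

        using System;
        using System.Net.Http;
        using System.Net.Http.Headers;
        using System.Threading.Tasks;

        class AcceptHeaderDemo
        {
            static async Task Main()
            {
                using (var client = new HttpClient())
                {
                    // Hypothetical endpoint - point this at any action that returns a NegotiatedResult
                    var url = "http://localhost/myapp/Customers";

                    foreach (var mediaType in new[] { "application/json", "text/xml" })
                    {
                        var request = new HttpRequestMessage(HttpMethod.Get, url);
                        request.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue(mediaType));

                        var response = await client.SendAsync(request);
                        Console.WriteLine("Accept: {0} -> Content-Type: {1}",
                            mediaType, response.Content.Headers.ContentType);
                    }
                }
            }
        }
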
    Pass text/html for the Accept header and the "Show.cshtml" Razor view is rendered, passing the result model data and producing final HTML output. While this isn’t as clean as passing just POCO objects back as I had intended originally, this approach fits better with how MVC action methods are intended to be used, and we get the bonus of being able to specify a View to render (optionally) for HTML.

    How does it work? An ActionResult implementation is pretty straightforward. You inherit from ActionResult and implement the ExecuteResult method to send your output to the ASP.NET output stream. ActionFilters are an easy way to effectively do post processing on ASP.NET MVC controller actions just before the content is sent to the output stream, assuming your specific action result was used. Here’s the full code to the NegotiatedResult class (you can also check it out on GitHub): /// <summary> /// Returns a content negotiated result based on the Accept header. /// Minimal implementation that works with JSON and XML content, /// can also optionally return a view with HTML. /// </summary> /// <example> /// // model data only /// public ActionResult GetCustomers() /// { /// return new NegotiatedResult(repo.Customers.OrderBy( c=> c.Company) ) /// } /// // optional view for HTML /// public ActionResult GetCustomers() /// { /// return new NegotiatedResult("List", repo.Customers.OrderBy( c=> c.Company) ) /// } /// </example> public class NegotiatedResult : ActionResult { /// <summary> /// Data stored to be 'serialized'. Public /// so it's potentially accessible in filters. /// </summary> public object Data { get; set; } /// <summary> /// Optional name of the HTML view to be rendered /// for HTML responses /// </summary> public string ViewName { get; set; } public static bool FormatOutput { get; set; } static NegotiatedResult() { FormatOutput = HttpContext.Current.IsDebuggingEnabled; } /// <summary> /// Pass in data to serialize /// </summary> /// <param name="data">Data to serialize</param> public NegotiatedResult(object data) { Data = data; } /// <summary> /// Pass in data and an optional view for HTML views /// </summary> /// <param name="data"></param> /// <param name="viewName"></param> public NegotiatedResult(string viewName, object data) { Data = data; ViewName = viewName; } public override void ExecuteResult(ControllerContext context) { if (context == null) throw new ArgumentNullException("context"); HttpResponseBase response = context.HttpContext.Response; HttpRequestBase request = context.HttpContext.Request; // Look for specific content types if (request.AcceptTypes.Contains("text/html")) { response.ContentType = "text/html"; if (!string.IsNullOrEmpty(ViewName)) { var viewData = context.Controller.ViewData; viewData.Model = Data; var viewResult = new ViewResult { ViewName = ViewName, MasterName = null, ViewData = viewData, TempData = context.Controller.TempData, ViewEngineCollection = ((Controller)context.Controller).ViewEngineCollection }; viewResult.ExecuteResult(context.Controller.ControllerContext); } else response.Write(Data); } else if (request.AcceptTypes.Contains("text/plain")) { response.ContentType = "text/plain"; response.Write(Data); } else if (request.AcceptTypes.Contains("application/json")) { using (JsonTextWriter writer = new JsonTextWriter(response.Output)) { var settings = new JsonSerializerSettings(); if (FormatOutput) settings.Formatting = Newtonsoft.Json.Formatting.Indented; JsonSerializer serializer = JsonSerializer.Create(settings); serializer.Serialize(writer, Data);
    writer.Flush(); } } else if (request.AcceptTypes.Contains("text/xml")) { response.ContentType = "text/xml"; if (Data != null) { using (var writer = new XmlTextWriter(response.OutputStream, new UTF8Encoding())) { if (FormatOutput) writer.Formatting = System.Xml.Formatting.Indented; XmlSerializer serializer = new XmlSerializer(Data.GetType()); serializer.Serialize(writer, Data); writer.Flush(); } } } else { // just write data as a plain string response.Write(Data); } } } /// <summary> /// Extends Controller with Negotiated() ActionResult that does /// basic content negotiation based on the Accept header. /// </summary> public static class NegotiatedResultExtensions { /// <summary> /// Return content-negotiated content of the data based on Accept header. /// Supports: /// application/json - using JSON.NET /// text/xml - Xml as XmlSerializer XML /// text/html - as text, or an optional View /// text/plain - as text /// </summary> /// <param name="controller"></param> /// <param name="data">Data to return</param> /// <returns>serialized data</returns> /// <example> /// public ActionResult GetCustomers() /// { /// return this.Negotiated( repo.Customers.OrderBy( c=> c.Company) ) /// } /// </example> public static NegotiatedResult Negotiated(this Controller controller, object data) { return new NegotiatedResult(data); } /// <summary> /// Return content-negotiated content of the data based on Accept header. /// Supports: /// application/json - using JSON.NET /// text/xml - Xml as XmlSerializer XML /// text/html - as text, or an optional View /// text/plain - as text /// </summary> /// <param name="controller"></param> /// <param name="viewName">Name of the View to render when Accept is text/html</param> /// <param name="data">Data to return</param> /// <returns>serialized data</returns> /// <example> /// public ActionResult GetCustomers() /// { /// return this.Negotiated("List", repo.Customers.OrderBy( c=> c.Company) ) /// } /// </example> public static NegotiatedResult Negotiated(this Controller controller, string viewName, object data) { return new NegotiatedResult(viewName, data); } }

    Output Generation – JSON and XML Generating output for XML and JSON is simple – you use the desired serializer and off you go. Using XmlSerializer and JSON.NET, it’s just a handful of lines each to generate serialized output directly into the HTTP output stream. Please note this implementation uses JSON.NET for its JSON generation rather than the default JavaScriptSerializer that MVC uses, which I feel is an additional bonus of implementing this custom action. I’d already been using a custom JsonNetResult class previously, but now this is just rolled into this custom ActionResult. Just keep in mind that JSON.NET outputs slightly different JSON for certain things – collections, for example – so behavior may change. One addition to this implementation might be a flag to allow switching the JSON serializer.

    Html View Generation Html View generation actually turned out to be easier than anticipated. Initially I used my generic ASP.NET ViewRenderer Class that can render MVC views from any ASP.NET application. However, it turns out that since we are executing inside of an active MVC request, there’s an easier way: we can simply create a custom ViewResult, populate its members, and then execute it.
    The code in the text/html handling path that renders the view is simply this: response.ContentType = "text/html"; if (!string.IsNullOrEmpty(ViewName)) { var viewData = context.Controller.ViewData; viewData.Model = Data; var viewResult = new ViewResult { ViewName = ViewName, MasterName = null, ViewData = viewData, TempData = context.Controller.TempData, ViewEngineCollection = ((Controller)context.Controller).ViewEngineCollection }; viewResult.ExecuteResult(context.Controller.ControllerContext); } else response.Write(Data); which is a neat and easy way to render a Razor view, assuming you have an active controller that’s ready for rendering. Sweet – dependency removed, which makes this class self-contained, without any external dependencies other than JSON.NET.

    Summary While this isn’t exactly a new topic, it’s the first time I’ve actually delved into this with MVC. I’ve been doing content negotiation with Web API and, prior to that, with my REST library. This is the first time it’s come up as an issue in MVC. But as I have worked through this, I find that having a way to specify both HTML Views *and* JSON and XML results from a single controller certainly is appealing to me in many situations, as we are in this particular application returning identical data models for each of these operations. Rendering content negotiated views is something that I hope ASP.NET vNext will provide natively in the combined MVC and Web API model, but we’ll see how this actually will be implemented. In the meantime, having a custom ActionResult that provides this functionality is a workable and easily adaptable way of handling this going forward. Whatever ends up happening in ASP.NET vNext, the abstraction can probably be changed to support the native features of the future. Anyway, I hope some of you found this useful – if not for direct integration, then as insight into some of the rendering logic that MVC uses to get output into the HTTP stream…

    Related Resources: Latest Version of NegotiatedResult.cs on GitHub · Understanding Action Controllers · Rendering ASP.NET Views To String

    © Rick Strahl, West Wind Technologies, 2005-2014. Posted in MVC, ASP.NET, HTTP

    Read the article

  • Take Advantage of Oracle's Ongoing Assurance Effort!

    - by eric.maurice
    Hi, this is Eric Maurice again! A few years ago, I posted a blog entry, which discussed the psychology of patching. The point of this blog entry was that a natural tendency existed for systems and database administrators to be reluctant to apply patches, even security patches, because of the fear of "breaking" the system. Unfortunately, this belief in the principle "if it ain't broke, don't fix it!" creates significant risks for organizations. Running systems without applying the proper security patches can greatly compromise the security posture of the organization because the security controls available in the affected system may be compromised as a result of the existence of the unfixed vulnerabilities. As a result, Oracle continues to strongly recommend that customers apply all security fixes as soon as possible. Most recently, I have had a number of conversations with customers who questioned the need to upgrade their highly stable but otherwise unsupported Oracle systems. These customers wanted to know more about the kind of security risks they were exposed to, by running obsolete versions of Oracle software. As per Oracle Support Policies, Critical Patch Updates are produced for currently supported products. In other words, Critical Patch Updates are not created by Oracle for product versions that are no longer covered under the Premier Support or Extended Support phases of the Lifetime Support Policy. One statement used in each Critical Patch Update Advisory is particularly important: "We recommend that customers upgrade to a supported version of Oracle products in order to obtain patches. Unsupported products, releases and versions are not tested for the presence of vulnerabilities addressed by this Critical Patch Update. However, it is likely that earlier versions of affected releases are also affected by these vulnerabilities." The purpose of this warning is to inform Oracle customers that a number of the vulnerabilities fixed in each Critical Patch Update may affect older versions of a specific product line. In other words, each Critical Patch Update provides a number of fixes for currently supported versions of a given product line (this information is listed for each bug in the Risk Matrices of the Critical Patch Update Advisory), but the unsupported versions in the same product line, while they may be affected by the vulnerabilities, will not receive the fixes, and are therefore vulnerable to attacks. The risk assumed by organizations wishing to remain on unsupported versions is amplified by the behavior of malicious hackers, who typically will attempt to, and sometimes succeed in, reverse-engineering the content of vendors' security fixes. As a result, it is not uncommon for exploits to be published soon after Oracle discloses vulnerabilities with the release of a Critical Patch Update or Security Alert. Let's consider now the nature of the vulnerabilities that may exist in obsolete versions of Oracle software. A number of severe vulnerabilities have been fixed by Oracle over the years. While Oracle does not test unsupported products, releases and versions for the presence of vulnerabilities addressed by each Critical Patch Update, it should be assumed that a number of the vulnerabilities fixed with the Critical Patch Update program do exist in unsupported versions (regardless of the product considered). 
    The most severe vulnerabilities fixed in past Critical Patch Updates may result in full compromise of the targeted systems, down to the OS level, by remote and unauthenticated users (these vulnerabilities receive a CVSS Base Score of 10.0) or, almost as critically, may result in the compromise of the affected systems (without compromising the underlying OS) by remote and unauthenticated users (these vulnerabilities receive a CVSS Base Score of 7.5). Such vulnerabilities may result in the complete takeover of the targeted machine (for the CVSS 10.0), or may allow the attacker to create a denial of service against the affected system, or even to hijack or steal all the data hosted by the compromised system (for the CVSS 7.5). The bottom line is that organizations should assume the worst case: that the most critical vulnerabilities are present in their unsupported version; therefore, it is Oracle's recommendation that all organizations move to supported systems and apply security patches in a timely fashion. Organizations that currently run supported versions but may be late in their security patch release level can quickly catch up, because most Critical Patch Updates are cumulative. With a few exceptions noted in Oracle's Critical Patch Update Advisory, the application of the most recent Critical Patch Update will bring these products to the current security patch level and provide the organization with the best possible security posture for their patch level. Furthermore, organizations are encouraged to upgrade to the most recent versions, as this will greatly improve their security posture. At Oracle, our security fixing policies state that security fixes are produced for the main code line first, and as a result, our products benefit from the mistakes made in previous version(s). Our ongoing assurance effort ensures that we work diligently to fix the vulnerabilities we find, and we aim at constantly improving the security posture our products provide by default. Patch sets include numerous in-depth fixes in addition to those delivered through the Critical Patch Update and, in certain instances, important security fixes require major architectural changes that can only be included in new product releases (and cannot be backported through the Critical Patch Update program). For More Information: • Mary Ann Davidson is giving a webcast interview on Oracle Software Security Assurance on February 24th. The registration link for attending this webcast is located at http://event.on24.com/r.htm?e=280304&s=1&k=6A7152F62313CA09F77EBCEEA9B6294F&partnerref=EricMblog • A blog entry discussing Oracle's practices for ensuring the quality of Critical Patch Updates can be found at http://blogs.oracle.com/security/2009/07/ensuring_critical_patch_update_quality.html • The blog entry "To patch or not to patch" is located at http://blogs.oracle.com/security/2008/01/to_patch_or_not_to_patch.html • Oracle's Support Policies are located at http://www.oracle.com/us/support/policies/index.html • The Critical Patch Update & Security Alert page is located at http://www.oracle.com/technetwork/topics/security/alerts-086861.html

    Read the article

  • OData Mix10 – Part Dos

    - by Jon Dalberg
    The other day I had a snazzy post on fetching all the video (WMV) files from Mix ‘10: a simple console application that grabbed the URLs from the OData feed and downloaded the videos. I wanted to change that app to fire the OData query asynchronously, so here’s what resulted: 1: static void Main(string[] args) 2: { 3: var mix = new Mix.EventEntities(new Uri("http://api.visitmix.com/OData.svc")); 4:   5: var temp = mix.Files.Where(f => f.TypeName == "WMV"); 6: var query = temp as DataServiceQuery<Mix.File>; 7:   8: query.BeginExecute(OnFileQueryComplete, query); 9:   10: // waiting... 11: Console.ReadLine(); 12: } 13:   14: static void OnFileQueryComplete(IAsyncResult result) 15: { 16: var query = result.AsyncState as DataServiceQuery<Mix.File>; 17: var response = query.EndExecute(result); 18:   19: var web = new WebClient(); 20:   21: var myVideos = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.MyVideos), "Mix10"); 22:   23: Directory.CreateDirectory(myVideos); 24:   25: foreach (Mix.File f in response) 26: { 27: var fileName = new Uri(f.Url).Segments.Last(); 28: Console.WriteLine(f.Url); 29: web.DownloadFile(f.Url, Path.Combine(myVideos, fileName)); 30: } 31: } There are two important things here that are not explained well in the MSDN docs: See lines 5 and 6? That’s where I query for the WMV files, and it returns an IQueryable<T>. You *have* to cast that to a DataServiceQuery<T> and then call BeginExecute. The documented example does not filter, so it didn’t show that step. Line 16 shows the correct way to get the previously executed DataServiceQuery<T> from the async result. If you look at the MSDN example docs, they show (incorrectly) just casting the result, like this: // wrong var query = result as DataServiceQuery<Mix.File>; Other than those items it is relatively straightforward, and we’re all async-ified. Enjoy!

    Read the article

  • No root file system - Alternate CD + LVM

    - by Carlos
    I am trying to install 11.10 as dual boot with Windows 7. I have it all partitioned as you can see here: http://www.flickr.com/photos/42897978@N00/7111180385/ I burned the Alternate CD ISO to a CD, booted from it, and followed the instructions up to Partitioning. There, I configured the LVM partitions as follows: Volume Group ubuntu-vg - Uses Physical Volume /dev/sda7 380GB - Provides Logical Volume home-lv 60GB - Provides Logical Volume root-lv 60GB - Provides Logical Volume swap-lv 6GB That is all I want (note that my /boot is outside of LVM). Then, when I confirm that all is OK and choose to write the changes to disk and continue with the installation, I get the following error: !! Partition Disks No root file system No root file system is defined Please correct this from the partitioning menu. What should I fix, and how? I tried issuing "Revert changes to partitions", but nothing happens. It seems that the LVM configuration has already been written to the CD. HELP!!

    Read the article

  • When following SRP, how should I deal with validating and saving entities?

    - by Kristof Claes
    I've been reading Clean Code and various online articles about SOLID lately, and the more I read about it, the more I feel like I don't know anything. Let's say I'm building a web application using ASP.NET MVC 3. Let's say I have a UsersController with a Create action like this: public class UsersController : Controller { public ActionResult Create(CreateUserViewModel viewModel) { } } In that action method I want to save a user to the database if the data that was entered is valid. Now, according to the Single Responsibility Principle, an object should have a single responsibility, and that responsibility should be entirely encapsulated by the class. All its services should be narrowly aligned with that responsibility. Since validation and saving to the database are two separate responsibilities, I guess I should create two separate classes to handle them, like this: public class UsersController : Controller { private ICreateUserValidator validator; private IUserService service; public UsersController(ICreateUserValidator validator, IUserService service) { this.validator = validator; this.service= service; } public ActionResult Create(CreateUserViewModel viewModel) { ValidationResult result = validator.IsValid(viewModel); if (result.IsValid) { service.CreateUser(viewModel); return RedirectToAction("Index"); } else { foreach (var errorMessage in result.ErrorMessages) { ModelState.AddModelError(String.Empty, errorMessage); } return View(viewModel); } } } That makes some sense to me, but I'm not at all sure that this is the right way to handle things like this. It is, for example, entirely possible to pass an invalid instance of CreateUserViewModel to the IUserService class. I know I could use the built-in DataAnnotations, but what about when they aren't enough? Imagine that my ICreateUserValidator checks the database to see if there already is another user with the same name... Another option is to let the IUserService take care of the validation, like this: public class UserService : IUserService { private ICreateUserValidator validator; public UserService(ICreateUserValidator validator) { this.validator = validator; } public ValidationResult CreateUser(CreateUserViewModel viewModel) { var result = validator.IsValid(viewModel); if (result.IsValid) { // Save the user } return result; } } But I feel I'm violating the Single Responsibility Principle here. How should I deal with something like this?

    Read the article

  • Why is my site not showing up on google.com?

    - by nishant
    I am very frustrated about my website's ranking in Google. I am working hard on it but getting nowhere in Google. I am the webmaster of www.panbeli.in, a matrimony website in INDIA. I have been trying to improve its visibility in Google for the last 5 months but am not getting any positive result. Other search engines like Yahoo and Bing give good results, but Google shows no result for my site in the top 20 pages of results. My website is www.panbeli.in and my keywords are: bari samaj, bari matrimony, bari community, bari shadi, panbeli. Please help me if you can, because I am very frustrated about it. My domain age is 4 years. When I type link:panbeli.in in Google search, no pages from my site appear. What does that mean? Is my site not indexed in Google?

    Read the article

  • Why did Ubuntu and Windows start hanging mysteriously after I took a vacation?

    - by Ashrey Goel
    I installed Ubuntu alongside my Windows 7, after partitioning my HDD using the Easeus partitioning manager. It was working perfectly: no problems, no data loss or corruption. Then I went away for 2 days, and I don't know what happened in my absence, but now both Windows 7 and Ubuntu keep hanging continuously; for example, when you paint and change a brush, it hangs. I mean it hangs on very simple commands, and I know my computer does not hang on such petty things. I use it for developing music, and the specifications are: Model: DELL-XPS Processor: Intel i5, 2.53 GHz RAM/Memory: 4GB Hard disk size: 500GB HDD Windows 7 partition: 417 GB Ubuntu Partition: 50 GB Please help.

    Read the article

  • DX11 - Weird shader behavior with and without branching

    - by Martin Perry
    I have found a problem in my shader code which I don't know how to solve. I want to rewrite this code without "ifs": tmp = evaluate, and the result is 0 or 1 (nothing else) if (tmp == 1) val = X1; if (tmp == 0) val = X2; I rewrote it this way, but this piece of code doesn't work correctly: tmp = evaluate, and the result is 0 or 1 (nothing else) val = tmp * X1 val = !tmp * X2 However, if I change it to: tmp = evaluate, and the result is 0 or 1 (nothing else) val = tmp * X1 if (!tmp) val = !tmp * X2 it works fine... but that is useless because of the "if", which needs to be eliminated. I honestly don't understand it. I tried compilation with NO and FULL optimization; the result is the same.

    Read the article

  • Code Contracts: How they look after compiling?

    - by DigiMortal
    When you are using new tools that also generate something at the code level, it is a good idea to check out what additions are made to your code during compilation. Code contracts have a simple syntax when we are writing code in Visual Studio, but what happens after compilation? Are our methods the same as they look in code, or are they different after compilation? In this posting I will show you how code contracts look after compiling. In my previous examples about code contracts I used a randomizer class with a method called GetRandomFromRangeContracted. public int GetRandomFromRangeContracted(int min, int max) {     Contract.Requires<ArgumentOutOfRangeException>(         min < max,         "Min must be less than max"     );       Contract.Ensures(         Contract.Result<int>() >= min &&         Contract.Result<int>() <= max,         "Return value is out of range"     );       return _generator.Next(min, max); } Okay, it would be nice to dream that the code looks similar when we open our assembly with Reflector and disassemble it. But… this time we have something interesting. While reading this code, don't feel uncomfortable about the names of the variables. This is disassembled code; the .NET Framework internally allows these names. It is our compilers that don't accept them when we are building our code. public int GetRandomFromRangeContracted(int min, int max) {     int Contract.Old(min);     int Contract.Old(max);     if (__ContractsRuntime.insideContractEvaluation <= 4)     {         try         {             __ContractsRuntime.insideContractEvaluation++;             __ContractsRuntime.Requires<ArgumentOutOfRangeException>(                min < max,                "Min must be less than max", "min < max");         }         finally         {             __ContractsRuntime.insideContractEvaluation--;         }     }     try     {         Contract.Old(min) = min;     }     catch (Exception exception1)     {         if (exception1 == null)         {             throw;         }     }     try     {         Contract.Old(max) = max;     }     catch (Exception exception2)     {         if (exception2 == null)         {             throw;         }     }     int CS$1$0000 = this._generator.Next(min, max);     int Contract.Result<int>() = CS$1$0000;     if (__ContractsRuntime.insideContractEvaluation <= 4)     {         try         {             __ContractsRuntime.insideContractEvaluation++;             __ContractsRuntime.Ensures(                (Contract.Result<int>() >= Contract.Old(min)) &&                (Contract.Result<int>() <= Contract.Old(max)),                "Return value is out of range",                "Contract.Result<int>() >= min && Contract.Result<int>() <= max");         }         finally         {             __ContractsRuntime.insideContractEvaluation--;         }     }     return Contract.Result<int>(); } As we can see, contracts are not simply if-then-else checks and exception throwing. There is a counter that is incremented before the checks and decremented after them, whatever the result of the check was. One thing that is annoying to me is the null checks for exception1 and exception2. Is there really a possible situation where null is thrown instead of some instance that is an Exception or inherits from Exception? Conclusion Code contracts are a more complex mechanism than they seem when we look at them at code level. Internally, more is done than we know. I don't say it is wrong; it is just good to know how our code looks after compiling.
    Looking at this example, it is clear that we also need performance tests for contracted code, to see how heavy its impact on system performance is when we run code that makes heavy use of code contracts.
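
    A minimal sketch of such a test could look like the following. The Randomizer class comes from the example above; the uncontracted GetRandomFromRange counterpart, the iteration count and the single warm-up call are assumptions of this sketch, and a serious benchmark would need multiple samples and a proper harness:

        using System;
        using System.Diagnostics;

        class ContractsBenchmark
        {
            static void Main()
            {
                var randomizer = new Randomizer(); // class from the example above
                const int iterations = 1000000;

                // Warm up both paths so JIT compilation does not skew the numbers
                randomizer.GetRandomFromRange(1, 100);           // assumed uncontracted variant
                randomizer.GetRandomFromRangeContracted(1, 100);

                var watch = Stopwatch.StartNew();
                for (var i = 0; i < iterations; i++)
                    randomizer.GetRandomFromRange(1, 100);
                watch.Stop();
                Console.WriteLine("Plain:      {0} ms", watch.ElapsedMilliseconds);

                watch.Restart();
                for (var i = 0; i < iterations; i++)
                    randomizer.GetRandomFromRangeContracted(1, 100);
                watch.Stop();
                Console.WriteLine("Contracted: {0} ms", watch.ElapsedMilliseconds);
            }
        }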

    Read the article

  • Loose typing not applied to objects

    - by TecBrat
    I have very little experience working with classes and objects. I work in a loosely typed language, PHP. I was working with a SimpleXML object and ran into a problem where I was trying to do math with an element of that object, like $results->ProductDetail->{'Net'.$i}; If I echoed that value, I'd get 0.53, but when I tried to do math with it, it was converted to 0. Is there a reason that a loosely typed language would not recognize that as a float and handle it as such? Why would "echo" handle it as a string but the math fail to convert it? Example: $xml='<?xml version="1.0" encoding="UTF-8" ?>'; $xml.='<Test> <Item> <Price>0.53</Price> </Item> </Test>'; $result=simplexml_load_string($xml); var_dump($result->Item->Price); echo '<br>'; echo $result->Item->Price; echo '<br>'; echo 1+$result->Item->Price; echo '<br>'; echo 1+(float)$result->Item->Price; Output: object(SimpleXMLElement)#4 (1) { [0]=> string(4) "0.53" } 0.53 1 1.53

    Read the article

  • No root partition

    - by christian ashton
    I'm having serious problems trying to install Ubuntu 14.04 on my Toshiba Satellite C850 laptop. Whenever I try to install it, it says "No root file system found. Please correct this error from the partitioning menu." I've tried to create new partitions, but I don't know what size to make them, etc. Can someone please help me with this? (Is there a way to create partitions through the terminal? I'm new to Ubuntu.) I was on it 2 days ago; the laptop crashed, and now I can't get past the installation. I've created a new partition table and just need guidance on which partitions to create and how big to make them. I have absolutely no partitions whatsoever on my hard drive (or whatever the partitions are part of), so it will need to be from the very start. Thanks in advance!

    Read the article

  • "Bad apple" algorithm, or process crashes shared sandbox

    - by Roger Lipscombe
    I'm looking for an algorithm to handle the following problem, which I'm (for now) calling the "bad apple" algorithm. The problem: I've got N processes running in M sandboxes, where N > M. It's impractical to give each process its own sandbox. At least one of those processes is badly-behaved and is bringing down the entire sandbox, thus killing all of the other processes. If it were a single badly-behaved process, then I could use a simple bisection to put half of the processes in one sandbox and half in another sandbox, until I found the miscreant. This could probably be extended by partitioning the set into more than two pieces until the badly-behaved process was found. For example, partitioning into 8 sets allows me to eliminate 7/8 of the search space at each step, and so on. The question: If more than one process is badly-behaved -- including the possibility that they're all badly-behaved -- does this naive algorithm "work"? Is it guaranteed to work within some sensible bounds?
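
    For illustration, here is a rough sketch of the bisection idea generalized to recursive group testing. The crashesSandbox callback, which runs a group of processes in a fresh sandbox and reports whether the sandbox came down, is a hypothetical stand-in for whatever the real environment provides. The sketch also hints at the multi-miscreant case asked about: when several processes are bad, both halves can test positive, so in the worst case (all bad) the recursion degenerates into testing every process individually:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class BadAppleSearch
        {
            // Returns the processes identified as bad apples. crashesSandbox(group)
            // must run the group in a fresh sandbox and report whether it crashed.
            public static List<int> Find(List<int> processes, Func<List<int>, bool> crashesSandbox)
            {
                if (processes.Count == 0 || !crashesSandbox(processes))
                    return new List<int>(); // an empty or surviving group contains no bad apple

                if (processes.Count == 1)
                    return new List<int>(processes); // a crashing singleton is a bad apple

                // Bisect: a crashing group has a bad apple in at least one of its halves
                var left = processes.Take(processes.Count / 2).ToList();
                var right = processes.Skip(processes.Count / 2).ToList();

                var bad = Find(left, crashesSandbox);
                bad.AddRange(Find(right, crashesSandbox));
                return bad;
            }
        }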

    Read the article

  • Dealing with the node.js callback pyramid

    - by thecoop
    I've just started using node, and one thing I've quickly noticed is how quickly callbacks can build up to a silly level of indentation: doStuff(arg1, arg2, function(err, result) { doMoreStuff(arg3, arg4, function(err, result) { doEvenMoreStuff(arg5, arg6, function(err, result) { omgHowDidIGetHere(); }); }); }); The official style guide says to put each callback in a separate function, but that seems overly restrictive on the use of closures, and making a single object declared in the top level available several layers down, as the object has to be passed through all the intermediate callbacks. Is it ok to use function scope to help here? Put all the callback functions that need access to a global-ish object inside a function that declares that object, so it goes into a closure? function topLevelFunction(globalishObject, callback) { function doMoreStuffImpl(err, result) { doMoreStuff(arg5, arg6, function(err, result) { callback(null, globalishObject); }); } doStuff(arg1, arg2, doMoreStuffImpl); } and so on for several more layers... Or are there frameworks etc to help reduce the levels of indentation without declaring a named function for every single callback? How do you deal with the callback pyramid?

    Read the article

  • Assignments in mock return values

    - by zerkms
    (I will show examples using PHP and PHPUnit, but this may be applied to any programming language.) The case: let's say we have a method A::foo that delegates some work to class M and returns the value as-is. Which of these solutions would you choose: $mock = $this->getMock('M'); $mock->expects($this->once()) ->method('bar') ->will($this->returnValue('baz')); $obj = new A($mock); $this->assertEquals('baz', $obj->foo()); or $mock = $this->getMock('M'); $mock->expects($this->once()) ->method('bar') ->will($this->returnValue($result = 'baz')); $obj = new A($mock); $this->assertEquals($result, $obj->foo()); or $result = 'baz'; $mock = $this->getMock('M'); $mock->expects($this->once()) ->method('bar') ->will($this->returnValue($result)); $obj = new A($mock); $this->assertEquals($result, $obj->foo()); Personally I always follow the 2nd solution, but just 10 minutes ago I had a conversation with a couple of developers who said that it is "too tricky" and chose the 3rd or the 1st. So what would you usually do? And do you have any conventions to follow in such cases?
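
    Since the question notes this applies to any language, here is a sketch of the 2nd variant in C# using Moq and xUnit; the IM interface and class A below are hypothetical stand-ins mirroring the question's M and A:

        using Moq;
        using Xunit;

        public interface IM { string Bar(); }

        public class A
        {
            private readonly IM m;
            public A(IM m) { this.m = m; }
            public string Foo() { return m.Bar(); } // delegates to M, returns the value as-is
        }

        public class AFooTests
        {
            [Fact]
            public void Foo_ReturnsValueProducedByM()
            {
                // The expected value is assigned inline where the stub is set up,
                // so the stubbed value and the assertion cannot drift apart.
                string result;
                var mock = new Mock<IM>();
                mock.Setup(x => x.Bar()).Returns(result = "baz");

                var obj = new A(mock.Object);

                Assert.Equal(result, obj.Foo());
                mock.Verify(x => x.Bar(), Times.Once());
            }
        }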

    Read the article

  • How do you proactively guard against errors of omission?

    - by Gabriel
    I'll preface this with: I don't know if anyone else who's been programming as long as I have actually has this problem, but at the very least, the answer might help someone with less XP. I just stared at this code for 5 minutes, thinking I was losing my mind that it didn't work: var usedNames = new HashSet<string>(); Func<string, string> l = (s) => { for (int i = 0; ; i++) { var next = (s + i).TrimEnd('0'); if (!usedNames.Contains(next)) { return next; } } }; Finally I noticed I forgot to add the used name to the hash set. Similarly, I've spent minutes upon minutes over omitting context.SaveChanges(). I think I get so distracted by the details that I'm thinking about that some really small details become invisible to me - it's almost at the level of a mental block. Are there tactics to prevent this? Update: a side effect of asking this was fixing the error it would have for i > 9 (Thanks!) var usedNames = new HashSet<string>(); Func<string, string> name = (s) => { string result = s; if(usedNames.Contains(s)) for (int i = 1; ; result = s + i++) if (!usedNames.Contains(result)) break; usedNames.Add(result); return result; };
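
    One tactic for this particular shape of omission, sketched below as an illustration rather than anything from the question itself, is to fold the easy-to-forget step into the same operation as the check, so no call site can do one without the other:

        using System.Collections.Generic;

        public class NameRegistry
        {
            private readonly HashSet<string> usedNames = new HashSet<string>();

            // Claiming a name both finds a free variant and records it as used,
            // so the Add() can never be forgotten at a call site.
            public string Claim(string baseName)
            {
                var candidate = baseName;
                for (var i = 1; usedNames.Contains(candidate); i++)
                    candidate = baseName + i;

                usedNames.Add(candidate);
                return candidate;
            }
        }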

    Read the article

  • How to determine which cells in a grid intersect with a given triangle?

    - by Ray Dey
    I'm currently writing a 2D AI simulation, but I'm not completely certain how to check whether the position of an agent is within the field of view of another. Currently, my world partitioning is simple cell-space partitioning (a grid). I want to use a triangle to represent the field of view, but how can I calculate the cells that intersect with the triangle? Similar to this picture: The red areas are the cells I want to calculate, by checking whether the triangle intersects those cells. Thanks in advance. EDIT: Just to add to the confusion (or perhaps even make it easier). Each cell has a min and max vector where the min is the bottom left corner and the max is the top right corner.

    Read the article

  • Updating query results

    - by Francisco Garcia
    Within a DDD and CQRS context, a query result is displayed as table rows. Whenever new rows are inserted or deleted, their positions must be calculated by comparing the previous query result with the most recent one. This is needed to visualize new or deleted rows with an animation. The model of my view contains an array of the displayed query results, but I need a place to compare its contents against the latest query. Right now I consider my view model part of my application layer, but the comparison of two query result sets seems like something that must be done within the domain layer. Which component should cache a query result, and which one should compare them? Are view models (and their cached contents) supposed to be in the application layer?

    Read the article

  • Accurate way to check latency between servers?

    - by Rick
    Hi! First of all, I hope my post is in the right section. Below is something I am confused about; I hope someone can help. My business partner's server IP is in San Jose, CA, and I am looking for a datacenter which gives the least latency to that IP. I found two: the 1st datacenter is in San Francisco, CA, and the 2nd datacenter is in New York. Then I pinged my partner's IP from each datacenter. 1st datacenter's result: 75ms. 2nd datacenter's result: 2ms. I've done this multiple times; the 2nd datacenter always gives the better result. Now my question is: isn't the 1st datacenter supposed to give the better result, since its location is closer? How come the results differ like this, and what is the accurate way to check latency? Thanks

    Read the article

  • Using Bulk Operations with Coherence Off-Heap Storage

    - by jpurdy
    Some NamedCache methods (including clear(), entrySet(Filter), aggregate(Filter, …), invoke(Filter, …)) may generate large intermediate results. The size of these intermediate results may result in out-of-memory exceptions on cache servers, and in some cases on cache clients. This may be particularly problematic if out-of-memory exceptions occur on more than one server (since these operations may be cluster-wide) or if these exceptions cause additional memory use on the surviving servers as they take over partitions from the failed servers. This may be particularly problematic with clusters that use off-heap storage (such as NIO or Elastic Data storage options), since these storage options allow greater than normal cache sizes but do nothing to address the size of intermediate results or final result sets. One workaround is to use a PartitionedFilter, which allows the application to break up a larger operation into a number of smaller operations, each targeting either a set of partitions (useful for reducing the load on each cache server) or a set of members (useful for managing client result set sizes). It is also possible to return a key set, and then pull in the full entries using that key set. This also allows the application to take advantage of near caching, though this may be of limited value if the result is large enough to result in near cache thrashing.

    Read the article

  • Depth Map resolution shifting

    - by user3669538
    The problem is with shadow mapping. As you can see, it actually works fine, but only under a certain condition: the depth map size must be equal to the size of the rendering buffer. I use an infinite directional light, so if the window is 800x600, the depth map must be 800x600. When I change the size of the shadow map to 900x600 it starts to be shifted, and when its size is 1024x1024 it shifts until it disappears. The GLSL shadow function: float calcShadow(sampler2D Dmap, vec4 coor){ vec4 sh = vec4((coor.xyz/coor.w),1); sh.z *= 0.9; return step(sh.z,texture2D(Dmap,sh.xy).r); } Here's the result when the shadow map is the same size as the window: Colored result & Depth Map. And here's the shifted result; as you can notice, the depth map is exactly like the previous one, with the addition of white space to the right. Colored result: http://goo.gl/5lYIFV Depth Map: http://goo.gl/7320Dd

    Read the article

  • Site loads Extremely slowly over HTTPS; loads perfectly over HTTP - Sudden and new issue

    - by guest234239048
    My business' website suddenly started loading extremely slowly over HTTPS last night, and the problem continues through today. The facts: Page load time via HTTPS: 2+ minutes; via HTTP: 3 seconds max. NO updates were made to any site files in the past 2 weeks. This is on shared hosting. HTTPS has worked perfectly over the past 9 months, then suddenly failed yesterday. The hosting company and the SSL issuer say there are no problems on their end. Searches for others having similar issues return no results; it appears to be just me... The site is primarily run via PHP/MySQL. Currently attempted troubleshooting: Tried all major browsers and different versions - same result. Tried 2 separate ISPs - same result. Tried proxies - same result. Tried 3 separate computers - same result. I'm basically at a total loss here. Does anyone know what could cause such a thing to happen? Please help guide troubleshooting.

    Read the article
