Search Results

Search found 2058 results on 83 pages for 'chain of responsibility'.

  • Trouble getting OS fingerprinting to work in iptables

    - by user1197457
    Everyone, as I understand it, OSF has been merged into the kernel since 2.6.before-my-kernel-version. Yet when I run something like this: iptables -I INPUT -j ACCEPT -p tcp -m osf --genre Linux --log 0 --ttl 2 I get an error like: iptables: No chain/target/match by that name. iptables -L shows no rules because I did an iptables -F at one point. Also, cat /proc/net/ip_tables_matches does not show "osf" in the list. A Google search doesn't seem to help. I've also installed iptables-devel in the hope of being able to load the osf module, but I haven't been able to get that to work. This is CentOS 6.4 minimal. Any guidance?

    Read the article

  • Perl - WWW::Mechanize Cookie Session Id is being reset with every get(), how to make it stop?

    - by Phill Pafford
    So I'm scraping a site that I have access to via HTTPS, I can login and start the process but each time I hit a new page (URL) the cookie Session Id changes. How do I keep the logged in Cookie Session Id? #!/usr/bin/perl -w use strict; use warnings; use WWW::Mechanize; use HTTP::Cookies; use LWP::Debug qw(+); use HTTP::Request; use LWP::UserAgent; use HTTP::Request::Common; my $un = 'username'; my $pw = 'password'; my $url = 'https://subdomain.url.com/index.do'; my $agent = WWW::Mechanize->new(cookie_jar => {}, autocheck => 0); $agent->{onerror}=\&WWW::Mechanize::_warn; $agent->agent('Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.3) Gecko/20100407 Ubuntu/9.10 (karmic) Firefox/3.6.3'); $agent->get($url); $agent->form_name('form'); $agent->field(username => $un); $agent->field(password => $pw); $agent->click("Log In"); print "After Login Cookie: "; print $agent->cookie_jar->as_string(); print "\n\n"; my $searchURL='https://subdomain.url.com/search.do'; $agent->get($searchURL); print "After Search Cookie: "; print $agent->cookie_jar->as_string(); print "\n"; The output: After Login Cookie: Set-Cookie3: JSESSIONID=367C6D; path="/thepath"; domain=subdomina.url.com; path_spec; secure; discard; version=0 After Search Cookie: Set-Cookie3: JSESSIONID=855402; path="/thepath"; domain=subdomain.com.com; path_spec; secure; discard; version=0 Also I think the site requires a CERT (Well in the browser it does), would this be the correct way to add it? $ENV{HTTPS_CERT_FILE} = 'SUBDOMAIN.URL.COM'; ## Insert this after the use HTTP::Request... Also for the CERT In using the first option in this list, is this correct? X.509 Certificate (PEM) X.509 Certificate with chain (PEM) X.509 Certificate (DER) X.509 Certificate (PKCS#7) X.509 Certificate with chain (PKCS#7)

    Read the article

  • Placing Variables into an External Sheet

    - by Leslie Peer
    I'm trying to build an online D&D program which stores the character info in tables. My problem is that the game works just fine while you're playing, but as soon as you exit the game all variables are lost, which means you have to restart from scratch the next time you log on... So this is a two-fold question: one, what is the best type of external sheet to save the data in, and two, how do I access that sheet for saving and loading? Below are the variables: <SCRIPT> Name1="Tabor Bloomfield"; Name2="Sam Wrightfield"; Name3="Gavin Hartfild"; Name4="Gail Quickfoot"; Name5="Robert Gragorian"; Name6="Peter Shain"; Class1="MagicUser"; Class2="Fighter"; Class3="Fighter"; Class4="Thief"; Class5="Cleric"; Class6="Fighter"; Level1=23; Level2=1; Level3=1; Level4=2; Level5=2; Level6=1; Hpts1=145; Hpts2=14; Hpts3=13; Hpts4=8; Hpts5=12; Hpts6=15; Armor1="Robe of Protection +5"; Armor2="Splinted Armor"; Armor3="Chain Armor"; Armor4="Leather Armor"; Armor5="Chain Armor"; Armor6="Splinted Armor"; Ac1a=5; Ac2a=3; Ac3a=3; Ac4a=4; Ac5a=2; Ac6a=3; Armor1b="Ring of Protection +5"; Armor2b="Small Shield"; Armor3b="Small Shield"; Armor4b="Wooden Shield"; Armor5b="Large Shield"; Armor6b="Small Shield"; Ac1b=5; Ac2b=1; Ac3b=1; Ac4b=1; Ac5b=1; Ac6b=1; Str1=21; Str2=16; Str3=14; Str4=13; Str5=14; Str6=13; Int1=19; Int2=11; Int3=12; Int4=13; Int5=14; Int6=13; Wis1=18; Wis2=12; Wis3=14; Wis4=13; Wis5=14; Wis6=12; Dex1=19; Dex2=14; Dex3=13; Dex4=15; Dex5=14; Dex6=12; Con1=19; Con2=15; Con3=16; Con4=13; Con5=12; Con6=10; Chr1=21; Chr2=14; Chr3=13; Chr4=12; Chr5=14; Chr6=13; </SCRIPT> File name = "gamestats", path = "trellian Webpage/droves E and F/gamestats". I have tried an HTML page, JavaScript, and creating a separate table page and putting the variables into cells... but I'm at a loss for how to arrive at a solution.

    Read the article

  • Servlet Filter: Socket needs to be referenced in doFilter()

    - by Craig m
    Right now I have a filter that opens its socket in init(); for some reason, when I open it in doFilter() it doesn't work correctly with the server app, so I have no choice but to put it in init(). I need to be able to call outSide.println("test"); in doFilter() so I can send a message to my server app every time the if statement it sits in is tripped. Here's my code: import java.net.*; import java.io.*; import java.util.*; import javax.servlet.*; import javax.servlet.http.*; public final class IEFilter implements Filter { public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException { String browser = ""; String blockInfo; String address = request.getRemoteAddr(); if(((HttpServletRequest)request).getHeader ("User-Agent").indexOf("MSIE") >= 0) { browser = "Internet Explorer"; } if(browser.equals("Internet Explorer")) { BufferedWriter fW = new BufferedWriter(new FileWriter("C://logs//IElog.rtf")); blockInfo = "Blocked IE user from:" + address; response.setContentType("text/html"); PrintWriter out = response.getWriter(); out.println("<HTML>"); out.println("<HEAD>"); out.println("<TITLE>"); out.println("This page is not available - JNetProtect"); out.println("</TITLE>"); out.println("</HEAD>"); out.println("<BODY>"); out.println("<center><H1>Error 403</H1>"); out.println("<br>"); out.println("<br>"); out.println("<H1>Access Denied</H1>"); out.println("<br>"); out.println("Sorry, that resource may not be accessed now."); out.println("<br>"); out.println("<br>"); out.println("<hr />"); out.println("<i>Page Filtered By JNetProtect</i>"); out.println("</BODY>"); out.println("</HTML>"); //init.outSide.println("Blocked and Internet Explorer user"); fW.write(blockInfo); fW.newLine(); fW.close(); } else { chain.doFilter(request, response); } } public void destroy() { outsocket.close(); outSide.close(); } public void init(FilterConfig filterConfig) { try { ServerSocket fs; Socket outsocket; PrintWriter outSide ; outsocket = new Socket("Localhost", 1337); outSide = new PrintWriter(outsocket.getOutputStream(), true); }catch (Exception e){ System.out.println("error with this connection"); e.printStackTrace();} } }
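    One likely reason outSide can't be reached from doFilter() is that outsocket and outSide are declared as local variables inside init(), so they vanish as soon as init() returns (as written, destroy() shouldn't even compile). A minimal sketch of the usual fix is to promote them to instance fields; it keeps the question's names and the localhost:1337 endpoint, and trims the error-page writing for brevity:

      import java.io.IOException;
      import java.io.PrintWriter;
      import java.net.Socket;
      import javax.servlet.*;
      import javax.servlet.http.HttpServletRequest;

      public final class IEFilter implements Filter {
          // Promoted to fields so init(), doFilter() and destroy() all see the same connection.
          private Socket outsocket;
          private PrintWriter outSide;

          public void init(FilterConfig filterConfig) throws ServletException {
              try {
                  outsocket = new Socket("localhost", 1337);
                  outSide = new PrintWriter(outsocket.getOutputStream(), true);
              } catch (IOException e) {
                  throw new ServletException("Could not connect to the monitoring app", e);
              }
          }

          public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                  throws IOException, ServletException {
              String agent = ((HttpServletRequest) request).getHeader("User-Agent");
              if (agent != null && agent.indexOf("MSIE") >= 0) {
                  outSide.println("Blocked an Internet Explorer user"); // now visible here
                  // ... write the 403 page and the log entry as in the original code ...
              } else {
                  chain.doFilter(request, response);
              }
          }

          public void destroy() {
              outSide.close();
              try { outsocket.close(); } catch (IOException ignored) { }
          }
      }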

    Read the article

  • Analyzing bitmaps produced by NSAffineTransform and CILineOverlay filters

    - by Adam
    I am trying to manipulate an image using a chain of CIFilters, and then examine each byte of the resulting image (bitmap). Long term, I do not need to display the resulting image (bitmap) -- I just need to "analyze" it in memory. But near-term I am displaying it on screen, to help with debugging. I have some "bitmap examination" code that works as expected when examining the NSImage (bitmap representation) I use as my input (loaded from a JPG file into an NSImage). And it SOMETIMES works as expected when I use it on the outputBitmap produced by the code below. More specifically, when I use an NSAffineTransform filter to create outputBitmap, then outputBitmap contains the data I would expect. But if I use a CILineOverlay filter to create the outputBitmap, none of the bytes in the bitmap have any data in them. I believe both of these filters are working as expected, because when I display their results on screen (via outputImageView), they look "correct." Yet when I examine the outputBitmaps, the one created from the CILineOverlay filter is "empty" while the one created from NSAffineTransfer contains data. Furthermore, if I chain the two filters together, the final resulting bitmap only seems to contain data if I run the AffineTransform last. Seems very strange, to me??? My understanding (from reading the CI programming guide) is that the CIImage should be considered an "image recipe" rather than an actual image, because the image isn't actually created until the image is "drawn." Given that, it would make sense that the CIimage bitmap doesn't have data -- but I don't understand why it has data after I run the NSAffineTransform but doesn't have data after running the CILineOverlay transform? Basically, I am trying to determine if creating the NSCIImageRep (ir in the code below) from the CIImage (myResult) is equivalent to "drawing" the CIImage -- in other words if that should force the bitmap to be populated? If someone knows the answer to this please let me know -- it will save me a few hours of trial and error experimenting! Finally, if the answer is "you must draw to a graphics context" ... then I have another question: would I need to do something along the lines of what is described in the Quartz 2D Programming Guide: Graphics Contexts, listing 2-7 and 2-8: drawing to a bitmap graphics context? That is the path down which I am about to head ... but it seems like a lot of code just to force the bitmap data to be dumped into an array where I can get at it. So if there is an easier or better way please let me know. I just want to take the data (that should be) in myResult and put it into a bitmap array where I can access it at the byte level. And since I already have code that works with an NSBitmapImageRep, unless doing it that way is a bad idea for some reason that is not readily apparent to me, then I would prefer to "convert" myResult into an NSBitmapImageRep. CIImage * myResult = [transform valueForKey:@"outputImage"]; NSImage *outputImage; NSCIImageRep *ir = [NSCIImageRep alloc]; ir = [NSCIImageRep imageRepWithCIImage:myResult]; outputImage = [[[NSImage alloc] initWithSize: NSMakeSize(inputImage.size.width, inputImage.size.height)] autorelease]; [outputImage addRepresentation:ir]; [outputImageView setImage: outputImage]; NSBitmapImageRep *outputBitmap = [[NSBitmapImageRep alloc] initWithCIImage: myResult]; Thanks, Adam

    Read the article

  • Spring Security and the Synchronizer Token J2EE pattern, problem when authentication fails.

    - by dfuse
    Hey, we are using Spring Security 2.0.4. We have a TransactionTokenBean which generates a unique token each POST, the bean is session scoped. The token is used for the duplicate form submission problem (and security). The TransactionTokenBean is called from a Servlet filter. Our problem is the following, after a session timeout occured, when you do a POST in the application Spring Security redirects to the logon page, saving the original request. After logging on again the TransactionTokenBean is created again, since it is session scoped, but then Spring forwards to the originally accessed url, also sending the token that was generated at that time. Since the TransactionTokenBean is created again, the tokens do not match and our filter throws an Exception. I don't quite know how to handle this elegantly, (or for that matter, I can't even fix it with a hack), any ideas? This is the code of the TransactionTokenBean: public class TransactionTokenBean implements Serializable { public static final int TOKEN_LENGTH = 8; private RandomizerBean randomizer; private transient Logger logger; private String expectedToken; public String getUniqueToken() { return expectedToken; } public void init() { resetUniqueToken(); } public final void verifyAndResetUniqueToken(String actualToken) { verifyUniqueToken(actualToken); resetUniqueToken(); } public void resetUniqueToken() { expectedToken = randomizer.getRandomString(TOKEN_LENGTH, RandomizerBean.ALPHANUMERICS); getLogger().debug("reset token to: " + expectedToken); } public void verifyUniqueToken(String actualToken) { if (getLogger().isDebugEnabled()) { getLogger().debug("verifying token. expected=" + expectedToken + ", actual=" + actualToken); } if (expectedToken == null || actualToken == null || !isValidToken(actualToken)) { throw new IllegalArgumentException("missing or invalid transaction token"); } if (!expectedToken.equals(actualToken)) { throw new InvalidTokenException(); } } private boolean isValidToken(String actualToken) { return StringUtils.isAlphanumeric(actualToken); } public void setRandomizer(RandomizerBean randomizer) { this.randomizer = randomizer; } private Logger getLogger() { if (logger == null) { logger = Logger.getLogger(TransactionTokenBean.class); } return logger; } } and this is the Servlet filter (ignore the Ajax stuff): public class SecurityFilter implements Filter { static final String AJAX_TOKEN_PARAM = "ATXTOKEN"; static final String TOKEN_PARAM = "TXTOKEN"; private WebApplicationContext webApplicationContext; private Logger logger = Logger.getLogger(SecurityFilter.class); public void init(FilterConfig config) { setWebApplicationContext(WebApplicationContextUtils.getWebApplicationContext(config.getServletContext())); } public void destroy() { } public void doFilter(ServletRequest req, ServletResponse response, FilterChain chain) throws IOException, ServletException { HttpServletRequest request = (HttpServletRequest) req; if (isPostRequest(request)) { if (isAjaxRequest(request)) { log("verifying token for AJAX request " + request.getRequestURI()); getTransactionTokenBean(true).verifyUniqueToken(request.getParameter(AJAX_TOKEN_PARAM)); } else { log("verifying and resetting token for non-AJAX request " + request.getRequestURI()); getTransactionTokenBean(false).verifyAndResetUniqueToken(request.getParameter(TOKEN_PARAM)); } } chain.doFilter(request, response); } private void log(String line) { if (logger.isDebugEnabled()) { logger.debug(line); } } private boolean isPostRequest(HttpServletRequest request) { return 
"POST".equals(request.getMethod().toUpperCase()); } private boolean isAjaxRequest(HttpServletRequest request) { return request.getParameter("AJAXREQUEST") != null; } private TransactionTokenBean getTransactionTokenBean(boolean ajax) { return (TransactionTokenBean) webApplicationContext.getBean(ajax ? "ajaxTransactionTokenBean" : "transactionTokenBean"); } void setWebApplicationContext(WebApplicationContext context) { this.webApplicationContext = context; } }

    Read the article

  • Losing sessions on GlassFish

    - by synti
    I have a web application that logs users in via a @SessionScoped managed bean. It's all basic stuff, pretty much like this: users log in using a regular HTTP form and get redirected to the user area (which is protected by a filter). But when any resource in that area is accessed, the request somehow uses a new session, which has no managed bean and no user, so the filter does its job and redirects the user to the login page. Here's the login form: <h:form> <h:outputLabel for="email" value="Email "/> <p:inputText id="email" size="30" value="#{loginManager.email}"/> <h:outputLabel for="password" value="Password "/> <p:password id="password" size="12" value="#{loginManager.password}"/> <p:commandButton value="Login" action="#{loginManager.login()}"/> </h:form> The loginManager managed bean: @ManagedBean @SessionScoped public class LoginManager implements Serializable { @EJB private UserService userService; private User user; private String email; private String password; public String login() { user = userService.findBy(email, password); if (user == null) { // FacesMessage stuff } else { return "/user/welcome.xhtml?faces-redirect=true"; } } public String logout() { FacesContext.getCurrentInstance().getExternalContext().invalidateSession(); return "/index.xhtml?faces-redirect=true"; } // Getters, setters (no setter for user) and serialVersionUID And then comes the filter that protects the user area: @WebFilter(urlPatterns="/user/*", displayName="UserFilter") public class UserFilter implements Filter { @Override public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException { HttpSession session = ((HttpServletRequest)request).getSession(false); LoginManager loginManager = (LoginManager) session.getAttribute("loginManager"); if (loginManager == null || !loginManager.hasUser()) { HttpServletResponse resp = (HttpServletResponse) response; resp.sendRedirect("index.xhtml"); } final User user = loginManager.getUser(); if (user.isValid()) { chain.doFilter(request, response); } else { HttpServletResponse resp = (HttpServletResponse) response; resp.sendRedirect("index.xhtml"); } } The UserService is just a stateless EJB that handles persistence. Part of the JSF for the user area: <h:form> <p:panelMenu> <p:submenu label="Items"> <p:menuitem value="Add item" action="#{userItens.addItems}" ajax="false"/> <p:menuitem value="My items" /> </p:submenu> </p:panelMenu> </h:form> And finally the userItens managed bean: @ManagedBean @RequestScoped public class UserItens { private User user; @PostConstruct private void init() { HttpSession session = (HttpSession) FacesContext.getCurrentInstance() .getExternalContext().getSession(false); LoginManager loginManager = (LoginManager) session.getAttribute("loginManager"); if (loginManager != null) user = loginManager.getUser(); } public String addItems() { // Doesn't get here. Seems like UserFilter comes first, doesn't find // an user and redirects. } I'm using GlassFish, and the session timeout is currently set to 0.
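    Whatever is invalidating the session, note that the UserFilter as posted has a problem of its own: execution continues after sendRedirect(), so a request that arrives without a LoginManager in the session falls straight through to loginManager.getUser() and throws a NullPointerException, and the relative target "index.xhtml" resolves under /user/. A sketch of a more defensive doFilter() (the redirect target is an assumption):

      @Override
      public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
              throws IOException, ServletException {
          HttpServletRequest req = (HttpServletRequest) request;
          HttpServletResponse resp = (HttpServletResponse) response;

          HttpSession session = req.getSession(false); // don't create a brand-new session here
          LoginManager loginManager =
                  (session == null) ? null : (LoginManager) session.getAttribute("loginManager");

          if (loginManager == null || !loginManager.hasUser() || !loginManager.getUser().isValid()) {
              // Redirect relative to the webapp root and stop; without the return the chain keeps running.
              resp.sendRedirect(req.getContextPath() + "/index.xhtml");
              return;
          }
          chain.doFilter(request, response);
      }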

    Read the article

  • JavaScript String Library - Hitting a Minor Roadblock

    - by OneNerd
    Ok - am trying to create a string library that contains a handful of useful things missing from JavaScript. Here is what I have so far: ;function $__STRING__$(in_string) { /* internal functions */ this.s = in_string; this.toString = function(){return this.s;}; /******* these functions CAN be chained (they return the $__STRING__$ object) ******/ this.uppercase = function(){this.s = this.s.toUpperCase(); return this;}; this.lowercase = function(){this.s = this.s.toLowerCase(); return this;}; this.trim = function(){this.s = this.s.replace(/^\s+|\s+$/g,""); return this;}; this.ltrim = function(){this.s = this.s.replace(/^\s+/,""); return this;}; this.rtrim = function(){this.s = this.s.replace(/\s+$/,""); return this;}; this.striptags = function(){this.s = this.s.replace(/<\/?[^>]+(>|$)/g, ""); return this;}; this.escapetags = function(){this.s = this.s.replace(/</g,"<").replace(/>/g,">"); return this;}; this.unescapetags = function(){this.s = this.s.replace(/</g,"<").replace(/>/g,">"); return this;}; this.underscorize = function(){this.s = this.s.replace(/ /g,"_"); return this;}; this.dasherize = function(){this.s = this.s.replace(/ /g,"-"); return this;}; this.spacify = function(){this.s = this.s.replace(/_/g," "); return this;}; this.left = function(length){this.s = this.s.substring(length,0); return this;}; this.right = function(length){this.s = this.s.substring(this.s.length,this.s.length-length); return this;}; this.shorten = function(length){if(this.s.length<=length){return this.s;}else{this.left(this.s,length)+"..."; return this;}}; this.mid = function(start,length){return this.s.substring(start,(length+start));}; this._down = function(){return this.s;}; // breaks chain, but lets you run core js string functions /******* these functions CANNOT be chained (they do not return the $__STRING__$ object) ******/ this.contains = function(needle){if(this.s.indexOf(needle)!==-1){return true;}else{return false;}}; this.startswith = function(needle){if(this.left(this.s,needle.length)==needle){return true;}else{return false;}}; this.endswith = function(needle){if(this.right(this.s,needle.length)==needle){return true;}else{return false;};}; } function $E(in_string){return new $__STRING__$(in_string);} String.prototype._enhance = function(){return new $__STRING__$(this);}; String.prototype._up = function(){return new $__STRING__$(this);}; It works fairly well, and I can chain commands etc. I set it up so I can cast a string as an enhanced string these 2 ways: $E('some string'); 'some string'._enhance(); However, each time I want to use a built-in string method, I need to convert it back to a string first. So for now, I put in _down() and _up() methods like so: alert( $E("hello man").uppercase()._down().replace("N", "Y")._up().dasherize() ); alert( "hello man"._enhance().uppercase()._down().replace("N", "Y")._up().dasherize() ); It works fine, but what I really want to do it be able to use all of the built-in functions a string can use. I realize I can just replicate each function inside my object, but I was hoping there was a simpler way. So question is, is there an easy way to do that? Thanks -

    Read the article

  • Applying Unity in a dynamic menu

    - by Rajarshi
    I was going through Unity 2.0 to check if it has an effective use in our new application. My application is a Windows Forms application and uses a traditional bar menu (at the top), currently. My UIs (Windows Forms) more or less support Dependency Injection pattern since they all work with a class (Presentation Model Class) supplied to them via the constructor. The form then binds to the properties of the supplied P Model class and calls methods on the P Model class to perform its duties. Pretty simple and straightforward. How P Model reacts to the UI actions and responds to them by co-ordinating with the Domain Class (Business Logic/Model) is irrelevant here and thus not mentioned. The object creation sequence to show up one UI from menu then goes like this - Create Business Model instance Create Presentation Model instance with Business Model instance passed to P Model constructor. Create UI instance with Presentation Model instance passed to UI constructor. My present solution: To show an UI in the method above from my menu I would have to refer all assemblies (Business, PModel, UI) from my Menu class. Considering I have split the modules into a number of physical assemblies, that would be a dificult task to add references to about 60 different assemblies. Also the approach is not very scalable since I would certainly need to release more modules and with this approach I would have to change the source code every time I release a new module. So primarily to avoid the reference of so many assemblies from my Menu class (assembly) I did as below - Stored all the dependency described above in a database table (SQL Server), e.g. ModuleShortCode | BModelAssembly | BModelFullTypeName | PModelAssembly | PModelFullTypeName | UIAssembly | UIFullTypeName Now used a static class named "Launcher" with a method "Launch" as below - Launcher.Launch("Discount") Launcher.Launch("Customers") The Launcher internally uses data from the dependency table and uses Activator.CreateInstance() to create each of the objects and uses the instance as constructor parameter to the next object being created, till the UI is built. The UI is then shown as a modal dialog. The code inside Launcher is somewhat like - Form frm = ResolveForm("Discount"); frm.ShowDialog(); The ResolveForm does the trick of building the chain of objects. Can Unity help me here? Now when I did that I did not have enough information on Unity and now that I have studied Unity I think I have been doing more or less the same thing. So I tried to replace my code with Unity. However, as soon as I started I hit a block. If I try to resolve UI forms in my Menu as Form customers = myUnityContainer.Resolve(); or Form customers = myUnityContainer.Resolve(typeof(Customers)); Then either way, I need to refer to my UI assembly from my Menu assembly since the target Type "Customers" need to be known for Unity to resolve it. So I am back to same place since I would have to refer all UI assemblies from the Menu assembly. I understand that with Unity I would have to refer fewer assemblies (only UI assemblies) but those references are needed which defeats my objectives below - Create the chain of objects dynamically without any assembly reference from Menu assembly. This is to avoid Menu source code changing every time I release a new module. My Menu also is built dynamically from a table. Be able to supply new modules just by supplying the new assemblies and inserting the new Dependency row in the table by a database patch. 
At this stage, I have a feeling that I have to do it the way I was doing, i.e. Activator.CreateInstance() to fulfil all my objectives. I need to verify whether the community thinks the same way as me or have a better suggestion to solve the problem. The post is really long and I sincerely thank you if you come til this point. Waiting for your valuable suggestions. Rajarshi

    Read the article

  • Sharing a texture resource from DX11 to DX9 to WPF, need to wait for DeviceContext.Flush() to finish

    - by Rei Miyasaka
    I'm following these instructions on TheCodeProject for rendering from DirectX to WPF using D3DImage. The trouble is that I now have no swap chain to call Present() on -- which, according to the article, shouldn't be a problem, but my back buffer definitely wasn't being copied. An additional step I have to take before I can copy the texture to WPF is to share it with a second D3D9Ex device, since D3DImage only works with DX9 (which is understandable, as WPF is built on DX9). To that end, I've modified some SlimDX code to work with DirectX 11. I tried calling DeviceContext.Flush() (the Immediate one) at the end of each render cycle, which kind of works -- most of the time it shows my renderings, but for maybe 3 or 4 out of 60 frames each second it draws my clear color instead. This makes sense -- Flush() is non-blocking; it doesn't wait for the GPU to finish the way SwapChain.Present() does. Any idea what the proper solution is? I have a feeling it has something to do with my texture parameters for the back buffer, but I don't know.

    Read the article

  • Parallelism in .NET – Part 7, Some Differences between PLINQ and LINQ to Objects

    - by Reed
    In my previous post on Declarative Data Parallelism, I mentioned that PLINQ extends LINQ to Objects to support parallel operations.  Although nearly all of the same operations are supported, there are some differences between PLINQ and LINQ to Objects.  By introducing Parallelism to our declarative model, we add some extra complexity.  This, in turn, adds some extra requirements that must be addressed. In order to illustrate the main differences, and why they exist, let’s begin by discussing some differences in how the two technologies operate, and look at the underlying types involved in LINQ to Objects and PLINQ . LINQ to Objects is mainly built upon a single class: Enumerable.  The Enumerable class is a static class that defines a large set of extension methods, nearly all of which work upon an IEnumerable<T>.  Many of these methods return a new IEnumerable<T>, allowing the methods to be chained together into a fluent style interface.  This is what allows us to write statements that chain together, and lead to the nice declarative programming model of LINQ: double min = collection .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24) .Min(item => item.PerformComputation()); Other LINQ variants work in a similar fashion.  For example, most data-oriented LINQ providers are built upon an implementation of IQueryable<T>, which allows the database provider to turn a LINQ statement into an underlying SQL query, to be performed directly on the remote database. PLINQ is similar, but instead of being built upon the Enumerable class, most of PLINQ is built upon a new static class: ParallelEnumerable.  When using PLINQ, you typically begin with any collection which implements IEnumerable<T>, and convert it to a new type using an extension method defined on ParallelEnumerable: AsParallel().  This method takes any IEnumerable<T>, and converts it into a ParallelQuery<T>, the core class for PLINQ.  There is a similar ParallelQuery class for working with non-generic IEnumerable implementations. This brings us to our first subtle, but important difference between PLINQ and LINQ – PLINQ always works upon specific types, which must be explicitly created. Typically, the type you’ll use with PLINQ is ParallelQuery<T>, but it can sometimes be a ParallelQuery or an OrderedParallelQuery<T>.  Instead of dealing with an interface, implemented by an unknown class, we’re dealing with a specific class type.  This works seamlessly from a usage standpoint – ParallelQuery<T> implements IEnumerable<T>, so you can always “switch back” to an IEnumerable<T>.  The difference only arises at the beginning of our parallelization.  When we’re using LINQ, and we want to process a normal collection via PLINQ, we need to explicitly convert the collection into a ParallelQuery<T> by calling AsParallel().  
There is an important consideration here – AsParallel() does not need to be called on your specific collection, but rather any IEnumerable<T>.  This allows you to place it anywhere in the chain of methods involved in a LINQ statement, not just at the beginning.  This can be useful if you have an operation which will not parallelize well or is not thread safe.  For example, the following is perfectly valid, and similar to our previous examples: double min = collection .AsParallel() .Select(item => item.SomeOperation()) .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24) .Min(item => item.PerformComputation()); However, if SomeOperation() is not thread safe, we could just as easily do: double min = collection .Select(item => item.SomeOperation()) .AsParallel() .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24) .Min(item => item.PerformComputation()); In this case, we’re using standard LINQ to Objects for the Select(…) method, then converting the results of that map routine to a ParallelQuery<T>, and processing our filter (the Where method) and our aggregation (the Min method) in parallel. PLINQ also provides us with a way to convert a ParallelQuery<T> back into a standard IEnumerable<T>, forcing sequential processing via standard LINQ to Objects.  If SomeOperation() was thread-safe, but PerformComputation() was not thread-safe, we would need to handle this by using the AsEnumerable() method: double min = collection .AsParallel() .Select(item => item.SomeOperation()) .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24) .AsEnumerable() .Min(item => item.PerformComputation()); Here, we’re converting our collection into a ParallelQuery<T>, doing our map operation (the Select(…) method) and our filtering in parallel, then converting the collection back into a standard IEnumerable<T>, which causes our aggregation via Min() to be performed sequentially. This could also be written as two statements, as well, which would allow us to use the language integrated syntax for the first portion: var tempCollection = from item in collection.AsParallel() let e = item.SomeOperation() where (e.SomeProperty > 6 && e.SomeProperty < 24) select e; double min = tempCollection.AsEnumerable().Min(item => item.PerformComputation()); This allows us to use the standard LINQ style language integrated query syntax, but control whether it’s performed in parallel or serial by adding AsParallel() and AsEnumerable() appropriately. The second important difference between PLINQ and LINQ deals with order preservation.  PLINQ, by default, does not preserve the order of of source collection. This is by design.  In order to process a collection in parallel, the system needs to naturally deal with multiple elements at the same time.  Maintaining the original ordering of the sequence adds overhead, which is, in many cases, unnecessary.  Therefore, by default, the system is allowed to completely change the order of your sequence during processing.  If you are doing a standard query operation, this is usually not an issue.  However, there are times when keeping a specific ordering in place is important.  If this is required, you can explicitly request the ordering be preserved throughout all operations done on a ParallelQuery<T> by using the AsOrdered() extension method.  This will cause our sequence ordering to be preserved. For example, suppose we wanted to take a collection, perform an expensive operation which converts it to a new type, and display the first 100 elements.  
In LINQ to Objects, our code might look something like: // Using IEnumerable<SourceClass> collection IEnumerable<ResultClass> results = collection .Select(e => e.CreateResult()) .Take(100); If we just converted this to a parallel query naively, like so: IEnumerable<ResultClass> results = collection .AsParallel() .Select(e => e.CreateResult()) .Take(100); We could very easily get a very different, and non-reproducable, set of results, since the ordering of elements in the input collection is not preserved.  To get the same results as our original query, we need to use: IEnumerable<ResultClass> results = collection .AsParallel() .AsOrdered() .Select(e => e.CreateResult()) .Take(100); This requests that PLINQ process our sequence in a way that verifies that our resulting collection is ordered as if it were processed serially.  This will cause our query to run slower, since there is overhead involved in maintaining the ordering.  However, in this case, it is required, since the ordering is required for correctness. PLINQ is incredibly useful.  It allows us to easily take nearly any LINQ to Objects query and run it in parallel, using the same methods and syntax we’ve used previously.  There are some important differences in operation that must be considered, however – it is not a free pass to parallelize everything.  When using PLINQ in order to parallelize your routines declaratively, the same guideline I mentioned before still applies: Parallelization is something that should be handled with care and forethought, added by design, and not just introduced casually.

    Read the article

  • Upgrading log shipping from 2005 to 2008 or 2008R2

    - by DavidWimbush
    If you're using log shipping you need to be aware of some small print. The general idea is to upgrade the secondary server first and then the primary server because you can continue to log ship from 2005 to 2008R2. But this won't work if you're keeping your secondary databases in STANDBY mode rather than IN RECOVERY. If you're using native log shipping you'll have some work to do. If you've rolled your own log shipping (ahem) you can convert a STANDBY database to IN RECOVERY like this:   restore database [dw]   with norecovery; and then change your restore code to use WITH NORECOVERY instead of WITH STANDBY. (Finally all that aggravation pays off!) You can either upgrade the secondary server in place or rebuild it. A secondary database doesn't actually get upgraded until you recover it so the log sequence chain is not broken and you can continue shipping from the primary. Just remember that it can take quite some time to upgrade a database so you need to factor that into the expectations you give people about how long it will take to fail over. For more details, check this out: http://msdn.microsoft.com/en-us/library/cc645954(SQL.105).aspx

    Read the article

  • Reminder: Premier Support for 10gR2 10.2.0.4 Database ends July 2010

    - by Steven Chan
    Regular readers know that Premier Support for the Oracle 10gR2 Database ends in July 2010, a scant few months from now.  What does that mean for E-Business Suite environments running on this database?The Oracle E-Business Suite is comprised of products like Financials, Supply Chain, Procurement, and so on.  Support windows for the E-Business Suite and these associated applications products are listed here:Oracle Lifetime Support > "Lifetime Support Policy: Oracle Applications" (PDF)The Oracle E-Business Suite can run on a variety of database releases, including 10gR2, 11gR1, and 11gR2.  Support windows for database releases are listed here:Oracle Lifetime Support > "Lifetime Support Policy: Oracle Technology Products" (PDF)Looking at those two documents together, you'll see that:Premier Support for Oracle E-Business Suite Release 11i ends on November 30, 2010Premier Support for Oracle E-Business Suite Release 12 ends on January 31, 2012Premier Support for Oracle E-Business Suite Release 12.1 ends on May 31, 2014Premier Support for Oracle Database 10.2 (a.k.a. 10gR2) ends on July 31, 2010[Note: These are the Premier Support dates as of today.  If you've arrived at this article in the future via a search engine, you must check the latest dates in the Lifetime Support Policy documents above; these dates are subject to change.]It's a bit hard to read, thanks to the layout restrictions of this blog, but the following diagram shows the Premier and Extended Support windows for the last four major database releases certified with Apps 11i:Do the EBS Premier Support dates trump the 10gR2 DB date?No.  Each of the support policies apply individually to your combined EBS + DB configuration.  The support dates for a given EBS release don't override the Database support policy.

    Read the article

  • Content Query Web Part - How do you OrderBy when you QueryOverride?

    - by Richard JP Le Guen
    How do you order items when you override the QueryOverride property of the Content Query Web Part? I have been given responsibility for a Web Part which extends the Content Query Web Part. The QueryOverride property of this Web Part is programmatically changed. Currently, the Web Part does not function as designed, as it does not order the items according to the appropriate field. If I add an <OrderBy> node to the QueryOverride property I get an error message along the lines of 'something wrong with the query this web part is...', and the Content Query Web Part doesn't seem to have an OrderBy property which I could use instead. The "QueryOverride property" part of this MSDN article seems to suggest I should be able to add an <OrderBy> node to the QueryOverride, but a number of web sites I've been reading suggest that this is not true. So, how do you order items when you override the QueryOverride property of the Content Query Web Part?

    Read the article

  • David Cameron addresses - The Oracle Retail Week Awards 2012

    - by user801960
    The Oracle Retail Week Awards 2012 were last night. In case you missed the action the introduction video for the Oracle Retail Week Awards 2012 is below, featuring interviews with UK Prime Minister David Cameron, Acting Editor of Retail Week George MacDonald, the judges for the awards and key figureheads in British retail. Check back on the blog in the next couple of days for more videos, interviews and insights from the awards. Oracle Retail and "Your Experience Platform" Technology is the key to providing that differentiated retail experience. More specifically, it is what we at Oracle call ‘the experience platform’ - a set of integrated, cross-channel business technology solutions, selected and operated by a retail business and IT team, and deployed in accordance with that organisation’s individual strategy and processes. This business systems architecture simultaneously: Connects customer interactions across all channels and touchpoints, and every customer lifecycle phase to provide a differentiated customer experience that meets consumers’ needs and expectations. Delivers actionable insight that enables smarter decisions in planning, forecasting, merchandising, supply chain management, marketing, etc; Optimises operations to align every aspect of the retail business to gain efficiencies and economies, to align KPIs to eliminate strategic conflicts, and at the same time be working in support of customer priorities.   Working in unison, these three goals not only help retailers to successfully navigate the challenges of today (identified in the previous session on this stage) but also to focus on delivering that personalised customer experience based on differentiated products, pricing, services and interactions that will help you to gain market share and grow sales.

    Read the article

  • Are You "INFOCUS"? We are!

    - by user709270
    The JD Edwards team is looking forward to participating in JD Edwards INFOCUS, the inaugural JD Edwards EnterpriseOne deep dive conference from Quest International Users Group. We've worked diligently with the leadership of Quest’s JD Edwards Special Interest Groups (SIGs) and Regional User Groups (RUGs) to make sure this national user event delivers JD Edwards content that meets the needs of the community. Plus, this event is being held right in JD Edwards’ backyard… Denver (Broomfield), Colorado! JD Edwards INFOCUS will be held November 7-9 at the Omni Interlocken Resort. Through our Product Strategy, Development and Support teams, Oracle will provide support for education sessions in these key tracks: · HCM · Financials · Manufacturing and Distribution · Real Estate Industry Forum · Supply Chain · Tools & Technology Oracle will host a JD Edwards EnterpriseOne Support demo booth to showcase many of the new capabilities available to you plus best practice approaches with existing capabilities, all to enhance your support experience. Oracle is also hosting a classroom-based Upgrades Workshop to explore methodology for a complete JD Edwards EnterpriseOne ERP software upgrade project. Space is limited so pre-register at QuestDirect.org/INFOCUS by adding the workshop to your agenda using the Agenda Builder on the Education tab. Finally, participate in one of the many enhancement discussions for key JD Edwards solutions at INFOCUS and contribute to the future of  JD Edwards through an interactive forum.  All of this is part of the 140+ education sessions being offered by the customer and vendor community.   There’s a lot of buzz around this conference, so don’t delay in registering key members of your team today.  We look forward to seeing you there so register NOW!

    Read the article

  • You Might Be a SharePoint Professional If…

    - by Mark Rackley
    I really think no explanation is needed. Hope this makes you smile.. Thanks again for being an awesome SharePoint community! If you can only dream about working an 8 hour day, there’s a good chance you are a SharePoint professional. You might be a SharePoint professional if the last time you heard “Old MacDonald Had a Farm” you wondered “How many web front ends does it have?” If you consider Twitter the best form of support since the dawn of the Internet, you might be a SharePoint professional. If you are giddy-as-a-school-girl excited about going to Anaheim in October and it has NOTHING to do with Disneyland, you might be a SharePoint professional. You might be a SharePoint professional if you own more SharePoint shirts than you do pairs of underwear. If you’ve thought of giving up a career in the IT world for a job taking orders at a fast food chain, you might be a SharePoint professional. You might be a SharePoint professional if the only people who understand the words that come out of your mouth are other SharePoint people. If you put the word “Share” or “SP” in front of EVERYTHING (ShareFood, SPRunner, etc… etc…) then you might be a SharePoint professional. You are probably a SharePoint professional if you love SharePoint.. you hate SharePoint… you love SharePoint… you hate SharePoint… If the only thing you’d rather do more than SharePoint is SharePint, then you are definitely a SharePoint professional. You might be a SharePoint professional if your idea of name dropping is “Andrew Connell says…” or “According to Todd Klindt”… or even “Well, when I was stuck in a Turkish prison with Joel Oleson…”

    Read the article

  • Should we always prefer OpenGL ES version 2 over version 1.x

    - by Shivan Dragon
    OpenGL ES version 2 goes a long way toward changing the development paradigm that was established with OpenGL ES 1.x. You have shaders which you can chain together to apply various effects/transforms to your elements, the projection and transformation matrices work completely differently, etc. I've seen a lot of online tutorials and blogs that simply say "ditch version 1.x, use version 2, that's the way to go". Even Android's documentation says to "use version 2 as it may prove faster than 1.x". Now, I've also read a book on OpenGL ES (which was rather good, but I'm not going to mention it here because I don't want to give the impression that I'm trying to sneak in advertising). The author treated only OpenGL ES 1.x for 80% of the book, and then at the end just listed the differences in version 2 and said something like "if OpenGL ES 1 does what you need, there's no need to switch to version 2, as it's only going to overcomplicate your code. Version 2 was changed a lot to facilitate newer, fancier stuff, but if you don't need it, version 1.x is fine". My question then is: is that last statement right? Should I always use OpenGL ES version 1.x if I don't need version-2-only features? I'd sure like to do that, because I find coding in version 1.x A LOT simpler than version 2, but I'm afraid my apps might become obsolete faster for using an older version.

    Read the article

  • Oracle’s AutoVue Enables Visual Decision Making

    - by Pam Petropoulos
    That old saying about a picture being worth a thousand words has never been truer.  Check out the latest reports from IDC Manufacturing Insights which highlight the importance of incorporating visual information in all facets of decision making and the role that Oracle’s AutoVue Enterprise Visualization solutions can play. Take a look at the excerpts below and be sure to click on the titles to read the full reports. Technology Spotlight: Optimizing the Product Life Cycle Through Visual Decision Making, August 2012 Manufacturers find it increasingly challenging to make effective product-related decisions as the result of expanded technical complexities, elongated supply chains, and a shortage of experienced workers. These factors challenge the traditional methodologies companies use to make critical decisions. However, companies can improve decision making by the use of visual decision making, which synthesizes information from multiple sources into highly usable visual context and integrates it with existing enterprise applications such as PLM and ERP systems. Product-related information presented in a visual form and shared across communities of practice with diverse roles, backgrounds, and job skills helps level the playing field for collaboration across business functions, technologies, and enterprises. Visual decision making can contribute to manufacturers making more effective product-related decisions throughout the complete product life cycle. This Technology Spotlight examines these trends and the role that Oracle's AutoVue and its Augmented Business Visualization (ABV) solution play in this strategic market. Analyst Connection: Using Visual Decision Making to Optimize Manufacturing Design and Development, September 2012 In today's environments, global manufacturers are managing a broad range of information. Data is often scattered across countless files throughout the product life cycle, generated by different applications and platforms. Organizations are struggling to utilize these multidisciplinary sources in an optimal way. Visual decision making is a strategy and technology that can address this challenge by integrating and widening access to digital information assets. Integrating with PLM and ERP tools across engineering, manufacturing, sales, and marketing, visual decision making makes digital content more accessible to employees and partners in the supply chain. The use of visual decision-making information rendered in the appropriate business context and shared across functional teams contributes to more effective product-related decision making and positively impacts business performance.

    Read the article

  • 3 Key Trends For Mobile Commerce – Location, Location, Location

    - by Michael Hylton
    This past weekend I was at a major bookstore chain and looking for a particular book.  Rather than ask the clerk, I went to my smartphone and went online to find the book title, author, and competing price.  I know I’m not alone in this effort and more and more individuals (and businesses) will use the power of mobility to tilt the scale in their favor. Armed with a mobile device – smartphone or tablet – folks will use them to research, compare, and ultimately purchase.  A recent PayPal survey found that 46% of respondents plan to use a mobile device this holiday season to make a purchase.   An astounding 27% of consumers in an e-tailing group survey commissioned by Oracle, use a tablet device daily or several times a week to research products and services. Beyond researching or making purchases, 35% of consumers use their smartphone to receive offers and coupons, and 32% access coupons and redeem them at their local retail store.  And with GPS capabilities in smartphones and tablet (and with user’s approval), retailers will start pushing coupons and offers directly to phone users based on their proximity to their store (or their competitors). Security is one concern that both shoppers, companies and phone manufacturers will have to deal with in the coming years.  In that same Oracle-sponsored e-tailing group consumer survey, 32% of consumers were concerned about giving their credit card information via a smartphone. You can gain further insight into the mind of today’s consumer by reading the e-tailing group white paper, titled “the connected consumer”.

    Read the article

  • How I understood monads, part 1/2: sleepless and self-loathing in Seattle

    - by Bertrand Le Roy
    For some time now, I had been noticing some interest for monads, mostly in the form of unintelligible (to me) blog posts and comments saying “oh, yeah, that’s a monad” about random stuff as if it were absolutely obvious and if I didn’t know what they were talking about, I was probably an uneducated idiot, ignorant about the simplest and most fundamental concepts of functional programming. Fair enough, I am pretty much exactly that. Being the kind of guy who can spend eight years in college just to understand a few interesting concepts about the universe, I had to check it out and try to understand monads so that I too can say “oh, yeah, that’s a monad”. Man, was I hit hard in the face with the limitations of my own abstract thinking abilities. All the articles I could find about the subject seemed to be vaguely understandable at first but very quickly overloaded the very few concept slots I have available in my brain. They also seemed to be consistently using arcane notation that I was entirely unfamiliar with. It finally all clicked together one Friday afternoon during the team’s beer symposium when Louis was patient enough to break it down for me in a language I could understand (C#). I don’t know if being intoxicated helped. Feel free to read this with or without a drink in hand. So here it is in a nutshell: a monad allows you to manipulate stuff in interesting ways. Oh, OK, you might say. Yeah. Exactly. Let’s start with a trivial case: public static class Trivial { public static TResult Execute<T, TResult>( this T argument, Func<T, TResult> operation) { return operation(argument); } } This is not a monad. I removed most concepts here to start with something very simple. There is only one concept here: the idea of executing an operation on an object. This is of course trivial and it would actually be simpler to just apply that operation directly on the object. But please bear with me, this is our first baby step. Here’s how you use that thing: "some string" .Execute(s => s + " processed by trivial proto-monad.") .Execute(s => s + " And it's chainable!"); What we’re doing here is analogous to having an assembly chain in a factory: you can feed it raw material (the string here) and a number of machines that each implement a step in the manufacturing process and you can start building stuff. The Trivial class here represents the empty assembly chain, the conveyor belt if you will, but it doesn’t care what kind of raw material gets in, what gets out or what each machine is doing. It is pure process. A real monad will need a couple of additional concepts. Let’s say the conveyor belt needs the material to be processed to be contained in standardized boxes, just so that it can safely and efficiently be transported from machine to machine or so that tracking information can be attached to it. Each machine knows how to treat raw material or partly processed material, but it doesn’t know how to treat the boxes so the conveyor belt will have to extract the material from the box before feeding it into each machine, and it will have to box it back afterwards. This conveyor belt with boxes is essentially what a monad is. It has one method to box stuff, one to extract stuff from its box and one to feed stuff into a machine. So let’s reformulate the previous example but this time with the boxes, which will do nothing for the moment except containing stuff. 
    public class Identity<T> {
        public Identity(T value) { Value = value; }
        public T Value { get; private set; }
        public static Identity<T> Unit(T value) {
            return new Identity<T>(value);
        }
        public static Identity<U> Bind<U>(
            Identity<T> argument, Func<T, Identity<U>> operation) {
            return operation(argument.Value);
        }
    }

    Now this is a true to the definition Monad, including the weird naming of the methods. It is the simplest monad, called the identity monad and of course it does nothing useful. Here’s how you use it:

    Identity<string>.Bind(
        Identity<string>.Unit("some string"),
        s => Identity<string>.Unit(
            s + " was processed by identity monad.")).Value

    That of course is seriously ugly. Note that the operation is responsible for re-boxing its result. That is a part of strict monads that I don’t quite get and I’ll take the liberty to lift that strange constraint in the next examples. To make this more readable and easier to use, let’s build a few extension methods:

    public static class IdentityExtensions {
        public static Identity<T> ToIdentity<T>(this T value) {
            return new Identity<T>(value);
        }
        public static Identity<U> Bind<T, U>(
            this Identity<T> argument, Func<T, U> operation) {
            return operation(argument.Value).ToIdentity();
        }
    }

    With those, we can rewrite our code as follows:

    "some string".ToIdentity()
        .Bind(s => s + " was processed by monad extensions.")
        .Bind(s => s + " And it's chainable...")
        .Value;

    This is considerably simpler but still retains the qualities of a monad. But it is still pointless. Let’s look at a more useful example, the state monad, which is basically a monad where the boxes have a label. It’s useful to perform operations on arbitrary objects that have been enriched with an attached state object.

    public class Stateful<TValue, TState> {
        public Stateful(TValue value, TState state) {
            Value = value;
            State = state;
        }
        public TValue Value { get; private set; }
        public TState State { get; set; }
    }

    public static class StateExtensions {
        public static Stateful<TValue, TState> ToStateful<TValue, TState>(
            this TValue value, TState state) {
            return new Stateful<TValue, TState>(value, state);
        }
        public static Stateful<TResult, TState> Execute<TValue, TState, TResult>(
            this Stateful<TValue, TState> argument,
            Func<TValue, TResult> operation) {
            return operation(argument.Value)
                .ToStateful(argument.State);
        }
    }

    You can get a stateful version of any object by calling the ToStateful extension method, passing the state object in. You can then execute ordinary operations on the values while retaining the state:

    var statefulInt = 3.ToStateful("This is the state");
    var processedStatefulInt = statefulInt
        .Execute(i => ++i)
        .Execute(i => i * 10)
        .Execute(i => i + 2);
    Console.WriteLine("Value: {0}; state: {1}",
        processedStatefulInt.Value, processedStatefulInt.State);

    This monad differs from the identity by enriching the boxes. There is another way to give value to the monad, which is to enrich the processing. An example of that is the writer monad, which can typically be used to log the operations that are being performed by the monad. Of course, the richest monads enrich both the boxes and the processing.

    That’s all for today. I hope with this you won’t have to go through the same process that I did to understand monads and that you haven’t gone into concept overload like I did. Next time, we’ll examine some examples that you already know but we will shine the monadic light, hopefully illuminating them in a whole new way.
    Realizing that this pattern is actually in many places but mostly unnoticed is what will enable the truly casual “oh, yes, that’s a monad” comments.

    Here’s the code for this article: http://weblogs.asp.net/blogs/bleroy/Samples/Monads.zip
    The Wikipedia article on monads: http://en.wikipedia.org/wiki/Monads_in_functional_programming
    This article was invaluable for me in understanding how to express the canonical monads in C# (interesting Linq stuff in there): http://blogs.msdn.com/b/wesdyer/archive/2008/01/11/the-marvels-of-monads.aspx
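
    The writer monad is described in the excerpt above but not shown. As an illustration only, here is a minimal sketch of what such a monad could look like in the same C# extension-method style; the names Logged<T>, ToLogged, and the logging Bind overload are assumptions of this sketch, not code from the article:

    using System;

    // A value boxed together with an accumulated log: the "enrich the processing" idea.
    public class Logged<T> {
        public Logged(T value, string log) {
            Value = value;
            Log = log;
        }
        public T Value { get; private set; }
        public string Log { get; private set; }
    }

    public static class LoggedExtensions {
        // Box a plain value with an empty log.
        public static Logged<T> ToLogged<T>(this T value) {
            return new Logged<T>(value, "");
        }
        // Run an operation on the boxed value and append a message to the log.
        public static Logged<U> Bind<T, U>(
            this Logged<T> argument, Func<T, U> operation, string message) {
            return new Logged<U>(
                operation(argument.Value),
                argument.Log + message + Environment.NewLine);
        }
    }

    A chain such as 3.ToLogged().Bind(i => i + 1, "incremented").Bind(i => i * 10, "multiplied by ten") would then carry Value 40 together with a log of the two steps, while the identity and state monads above only enrich the boxes.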

    Read the article

  • If some standards apply when "it depends" then should I stick with custom approaches?

    - by Travis J
    If I have an unconventional approach that works better than the industry standard, should I stick with it even though in principle it violates those standards? What I am talking about is referential integrity for relational database management systems. The standard way to enforce referential integrity is to CASCADE on delete. In practice, that is just not going to work all the time. In my current case, it does not. The suggested alternatives are to set the reference to NULL or DEFAULT, or to take NO ACTION - usually in the form of a "soft delete". I am all about enforcing referential integrity. Love it. However, sometimes it is just not practical to apply every one of those standards. My approach has been to abandon a small part of one of those practices: the part about never leaving "hanging references" around. Oops. The trade-off is well worth it in this situation, I believe. Instead of deprecated data sitting in the production database, "soft delete" logic splattered all across my controllers (and sometimes views, depending on how far down the chain the soft delete occurred), and the prospect of queries taking longer and longer - instead of all that - I now have a recycle bin and centralized logic. The only cost is that I must explicitly manage the possibility of "hanging references", which can be done through generics with one class. Any thoughts?
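
    The "one generic class" for managing hanging references is not shown in the question. As an illustration only, here is a minimal in-memory sketch of what such a recycle-bin class could look like in C#; the names RecycleBin<T>, IIdentifiable, and Resolve are assumptions of this sketch, and a real implementation would sit on top of the database rather than dictionaries:

    using System.Collections.Generic;

    public interface IIdentifiable {
        int Id { get; }
    }

    public class RecycleBin<T> where T : class, IIdentifiable {
        private readonly Dictionary<int, T> live = new Dictionary<int, T>();
        private readonly Dictionary<int, T> deleted = new Dictionary<int, T>();

        public void Add(T item) { live[item.Id] = item; }

        // "Delete" moves the record into the bin instead of cascading or nulling references.
        public void Delete(int id) {
            T item;
            if (live.TryGetValue(id, out item)) {
                live.Remove(id);
                deleted[id] = item;
            }
        }

        // Centralized handling of possibly hanging references: callers resolve an id
        // and learn whether it now points into the bin.
        public T Resolve(int id, out bool isDeleted) {
            T item;
            if (live.TryGetValue(id, out item)) { isDeleted = false; return item; }
            if (deleted.TryGetValue(id, out item)) { isDeleted = true; return item; }
            isDeleted = false;
            return null;
        }

        // Undo a soft delete by moving the record back to the live set.
        public void Restore(int id) {
            T item;
            if (deleted.TryGetValue(id, out item)) {
                deleted.Remove(id);
                live[id] = item;
            }
        }
    }

    Foreign keys can keep pointing at the deleted row's id; this single generic class is then the one place that has to know a reference may resolve into the bin, which is the centralized logic the question describes.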

    Read the article

  • Is ACE reactor timer management thread safe?

    - by idimba
    I have a module that manages timers in my application. This class basically has three functions, and an instance of ACE_Reactor is used internally by the module to manage the timers:

    schedule timer - calls ACE_Reactor::schedule_timer(). One of the arguments is a callback, called upon timer expiration.
    cancel timer - calls ACE_Reactor::cancel_timer().

    The reactor runs in a private thread of execution, so schedule/cancel and the timeout callback are executed in different threads. ACE_Reactor::schedule_timer() receives a heap-allocated structure (the arg argument). This structure is later deleted either when the timer is canceled or when the timeout handler is called. But since cancel and the timeout handler run in different threads, it looks like there are cases where the structure is deleted twice. Isn't it the responsibility of the reactor to ensure that the timer is canceled when the timeout handler is called?

    Read the article

< Previous Page | 42 43 44 45 46 47 48 49 50 51 52 53  | Next Page >