Search Results

Search found 2824 results on 113 pages for 'supply chain'.

Page 62/113

  • Move email off Small Business Server to Google Apps, retain other SBS functions?

    - by Paul S.
    Recently, an in-house Microsoft Small Business Server 2011 was installed where I work. Unfortunately, our buildings have a bad electrical power supply and we suffer frequent outages. We have a large percentage of staff working off-site, so when the power goes off here, everyone everywhere loses email functionality. I have been assigned to research the possibility of routing our email to Google Apps while maintaining LAN functions on the SBS. I haven't worked with Microsoft products for several years now, so I don't know how SBS is structured. Can anyone here tell me if this is possible, or point me to good resources that explain our options?
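
    For reference, routing mail to Google Apps is mostly a DNS change: the domain's MX records are pointed at Google's servers so inbound mail no longer depends on the on-site SBS box, while AD, file shares and the other SBS roles stay untouched. A hypothetical BIND-style zone fragment using the classic Google Apps MX set (verify the current values against Google's documentation before use):

        example.com.    IN  MX  1   ASPMX.L.GOOGLE.COM.
        example.com.    IN  MX  5   ALT1.ASPMX.L.GOOGLE.COM.
        example.com.    IN  MX  5   ALT2.ASPMX.L.GOOGLE.COM.
        example.com.    IN  MX  10  ASPMX2.GOOGLEMAIL.COM.
        example.com.    IN  MX  10  ASPMX3.GOOGLEMAIL.COM.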

    Read the article

  • I suspect that my HDD is causing hardlocks, as all other components have been replaced. How can I check this theory and solve the potential cause?

    - by user867814
    I have had this problem for quite a while now, through multiple Linux kernel versions and distributions, as well as replacement of all components aside from my main HDD: RAM, GPU (twice), motherboard, CPU, power supply. What happens is, at some point during the operation of the PC, it will hardlock: everything stops working, the external HDD is not shut down correctly and continues to spin until I unplug and reconnect it, there are no system/kernel logs of any kind, and nothing else that would suggest a cause. Another reason for my suspicion is that the failures happen almost exclusively during HDD read/write activity: shutdown (it happens nearly a third of the time so far, though it's only been a few days), launching programs, and once during an apt run. I hope the post is descriptive enough; if you need any additional info, ask (and tell me how to prepare/obtain it) and I will provide it. If I'm wrong, point me in the right direction. Thanks in advance.
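
    A minimal smartmontools sketch for testing this theory, assuming the suspect drive is /dev/sda (adjust the device node to match your system):

        # Health verdict, attribute table (watch Reallocated_Sector_Ct and
        # Current_Pending_Sector) and the drive's internal error log
        smartctl -H -A -l error /dev/sda

        # Run an extended self-test, then read the results once it completes
        smartctl -t long /dev/sda
        smartctl -l selftest /dev/sda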

    Read the article

  • Best Practice for upgrading PHP On Production Systems

    - by Demic
    We have two load-balanced web servers running PHP 5.3. I've been asked by our dev team to upgrade PHP to 5.4 because they need certain functionality it will bring. The main issue is that 5.3 is the latest that's been built into the distro's repository, so to upgrade using the package manager I'll need to add a third-party repo. I don't have a problem with this per se, but I'm concerned about using a package from a "non-official" source. The other option is to compile PHP from source, but I guess this will prevent me from using the package manager to upgrade at any stage in the future? So I guess I'm just looking for some guidance on which way to go. Compile from source, or install from any old repo that purports to supply PHP 5.4? Or perhaps there's a third option I haven't considered? Thanks in advance. Demic
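
    As a sketch of a middle ground, assuming a Debian/Ubuntu system and a third-party repository such as Dotdeb (used here purely as an example), apt pinning limits how much that repo is allowed to touch (glob support in the Package field requires a reasonably recent apt):

        # /etc/apt/preferences.d/php54
        # Let the third-party repo supply PHP packages...
        Package: php5*
        Pin: origin packages.dotdeb.org
        Pin-Priority: 700

        # ...but keep it from replacing anything else
        Package: *
        Pin: origin packages.dotdeb.org
        Pin-Priority: 100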

    Read the article

  • Network Service Account not Inherited in ACL

    - by 5lovak
    I have a problem with files that are being moved into a folder that is set to replace permissions on child objects for the Network Service account. The process is that a media file is uploaded to a website and is encoded by a piece of software. This moves the file to a folder, but for some reason the files that get moved there don't inherit the Network Service account in their security permissions. If I manually move a file into the folder, the permission is inherited. I have used the effective permissions tool to check the Network Service account's security permissions on the parent folder, but this shows that there is nothing overriding it; the account has full permissions. I can try to supply more info if required, but any answers greatly appreciated!
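
    For what it's worth, a move within the same NTFS volume keeps the file's original ACL rather than inheriting from the destination, which matches these symptoms. A sketch of forcing inheritance after the move with icacls (the path here is hypothetical):

        :: Replace the ACLs on everything under the drop folder with inherited ones
        icacls "D:\Media\Encoded" /reset /T /C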

    Read the article

  • Why does using 2 memory sticks cause my computer to crash?

    - by hi
    My computer randomly crashes when playing games, but if I remove one memory stick (it does not matter which one), it does not crash anymore. Memory tests do not find errors, I just put in a new power supply (650W), and I only have one graphics card, so why is this happening? BTW, they are the same memory, same vendor, same specs; I bought them together (2x2GB). My motherboard is an Asus P5Q Pro, so it supports both dual channel and more than 4GB. Switching slots does nothing; as long as I don't use more than one stick, I'm fine.

    Read the article

  • My Computer will not turn on?

    - by user269120
    I recently built a gaming PC. It was working fine before the following events: I installed a graphics card driver update, then Windows said that the user experience index needed to be refreshed, so I started the test, and somewhere in the middle of the test my PC just switched off. No shutting down; it just stopped. Now it won't turn back on. I have checked that it's plugged in and that the switch on the PSU is on, I tried a different power cable, and I checked all the connections. When I press the power button nothing happens: no fans, no lights, no POST beep. Computer parts:

        Motherboard: Gigabyte GA78LMT-USB3
        CPU: AMD FX-6350 @ 3.9 GHz
        RAM: 2x4GB Crucial Ballistix Sport
        Power supply: Tesla 750W PSU
        Graphics card: XFX Radeon 7870 DD
        Case: CiT Vantage R Gaming Case
        Hard drive: 2TB Western Digital Caviar Green

    Please help me; this computer is only a week old since I built it. All answers are appreciated :)

    Read the article

  • Trouble getting OS fingerprinting to work in iptables

    - by user1197457
    Everyone, as I understand it, OSF has been merged into the kernel since 2.6.before-my-kernel-version. Yet when I do something like this:

        iptables -I INPUT -j ACCEPT -p tcp -m osf --genre Linux --log 0 --ttl 2

    I get an error like:

        iptables: No chain/target/match by that name

    iptables -L shows no rules, because I did an iptables -F at one point. Also, the following command:

        cat /proc/net/ip_tables_matches

    does not show "osf" in the list. A Google search doesn't seem to help. I've also installed iptables-devel in the hope I'd be able to load the osf module; sadly I haven't been able to get that to work. This is CentOS 6.4 minimal. Any guidance?
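
    A sketch of the usual prerequisites, assuming the CentOS 6 kernel ships the xt_osf module (per the iptables osf match documentation, the fingerprint database must be loaded separately over nfnetlink):

        # Load the kernel-side match
        modprobe xt_osf

        # Load the OpenBSD pf.os fingerprint database; nfnl_osf comes with
        # recent iptables sources and may need to be built by hand on CentOS
        nfnl_osf -f /usr/share/pf.os

        # The match should now be listed
        grep osf /proc/net/ip_tables_matches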

    Read the article

  • Computer won't start when connecting SATA HDD

    - by vlad
    I bought a new HDD some time ago, and recently I bought another SATA cable so I could have both HDDs connected at the same time to transfer all the files from the older one to the newer. When I connected both of them, the computer started working; I tapped F2 to go into the BIOS to make sure both were detected, and then after 5-10 seconds the computer instantly shut down. After this incident, my computer won't start at all when the SATA power connector is attached to the old HDD. If I connect only my new drive, it works without any problem. If I connect only my old drive, or both the old drive and the new one, and then push the power button, the CPU fan simply rotates about 5 degrees and then stops, and that's it: the computer doesn't start, and there's no sound from the PC speaker either. Is there any way to recover my files from the old HDD and transfer them to my new one? I do not want to use the old HDD any more; I only need to save all my files.

    Read the article

  • Perl - WWW::Mechanize Cookie Session Id is being reset with every get(), how to make it stop?

    - by Phill Pafford
    So I'm scraping a site that I have access to via HTTPS. I can log in and start the process, but each time I hit a new page (URL) the cookie session id changes. How do I keep the logged-in cookie session id?

        #!/usr/bin/perl -w
        use strict;
        use warnings;

        use WWW::Mechanize;
        use HTTP::Cookies;
        use LWP::Debug qw(+);
        use HTTP::Request;
        use LWP::UserAgent;
        use HTTP::Request::Common;

        my $un = 'username';
        my $pw = 'password';
        my $url = 'https://subdomain.url.com/index.do';

        my $agent = WWW::Mechanize->new(cookie_jar => {}, autocheck => 0);
        $agent->{onerror} = \&WWW::Mechanize::_warn;
        $agent->agent('Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.3) Gecko/20100407 Ubuntu/9.10 (karmic) Firefox/3.6.3');
        $agent->get($url);
        $agent->form_name('form');
        $agent->field(username => $un);
        $agent->field(password => $pw);
        $agent->click("Log In");

        print "After Login Cookie: ";
        print $agent->cookie_jar->as_string();
        print "\n\n";

        my $searchURL = 'https://subdomain.url.com/search.do';
        $agent->get($searchURL);

        print "After Search Cookie: ";
        print $agent->cookie_jar->as_string();
        print "\n";

    The output:

        After Login Cookie: Set-Cookie3: JSESSIONID=367C6D; path="/thepath"; domain=subdomina.url.com; path_spec; secure; discard; version=0
        After Search Cookie: Set-Cookie3: JSESSIONID=855402; path="/thepath"; domain=subdomain.com.com; path_spec; secure; discard; version=0

    Also, I think the site requires a CERT (well, in the browser it does); would this be the correct way to add it?

        $ENV{HTTPS_CERT_FILE} = 'SUBDOMAIN.URL.COM';  ## Insert this after the use HTTP::Request...

    Also, for the CERT, in using the first option in this list, is this correct?

        X.509 Certificate (PEM)
        X.509 Certificate with chain (PEM)
        X.509 Certificate (DER)
        X.509 Certificate (PKCS#7)
        X.509 Certificate with chain (PKCS#7)
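
    A hedged sketch of the certificate half: Crypt::SSLeay (the usual LWP HTTPS backend of that era) reads these environment variables, and they expect PEM file paths rather than a hostname. The file names below are hypothetical; a file-backed cookie jar is included since it makes the session id easy to inspect between runs:

        # Client certificate and key, PEM format, for Crypt::SSLeay
        $ENV{HTTPS_CERT_FILE} = '/path/to/client-cert.pem';
        $ENV{HTTPS_KEY_FILE}  = '/path/to/client-key.pem';

        # ignore_discard keeps the session cookie even though it is marked "discard"
        my $jar = HTTP::Cookies->new(file => 'cookies.txt', autosave => 1, ignore_discard => 1);
        my $agent = WWW::Mechanize->new(cookie_jar => $jar, autocheck => 0);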

    Read the article

  • Placing Variables into an external Sheet

    - by Leslie Peer
    I'm trying to build an online D&D program which stores the character info in tables. My problem is that the game works just fine while you're playing, but as soon as you exit the game all variables are lost, which means you have to restart from scratch the next time you log on. So this is a two-fold question: one, what is the best type of external sheet to save to, and two, how do I access the sheet for saving and loading? Below are the variables:

        <SCRIPT>
        Name1="Tabor Bloomfield"; Name2="Sam Wrightfield"; Name3="Gavin Hartfild";
        Name4="Gail Quickfoot";   Name5="Robert Gragorian"; Name6="Peter Shain";

        Class1="MagicUser"; Class2="Fighter"; Class3="Fighter";
        Class4="Thief";     Class5="Cleric";  Class6="Fighter";

        Level1=23;  Level2=1;  Level3=1;  Level4=2;  Level5=2;  Level6=1;
        Hpts1=145;  Hpts2=14;  Hpts3=13;  Hpts4=8;   Hpts5=12;  Hpts6=15;

        Armor1="Robe of Protection +5"; Armor2="Splinted Armor"; Armor3="Chain Armor";
        Armor4="Leather Armor";         Armor5="Chain Armor";    Armor6="Splinted Armor";
        Ac1a=5; Ac2a=3; Ac3a=3; Ac4a=4; Ac5a=2; Ac6a=3;

        Armor1b="Ring of Protection +5"; Armor2b="Small Shield"; Armor3b="Small Shield";
        Armor4b="Wooden Shield";         Armor5b="Large Shield"; Armor6b="Small Shield";
        Ac1b=5; Ac2b=1; Ac3b=1; Ac4b=1; Ac5b=1; Ac6b=1;

        Str1=21; Str2=16; Str3=14; Str4=13; Str5=14; Str6=13;
        Int1=19; Int2=11; Int3=12; Int4=13; Int5=14; Int6=13;
        Wis1=18; Wis2=12; Wis3=14; Wis4=13; Wis5=14; Wis6=12;
        Dex1=19; Dex2=14; Dex3=13; Dex4=15; Dex5=14; Dex6=12;
        Con1=19; Con2=15; Con3=16; Con4=13; Con5=12; Con6=10;
        Chr1=21; Chr2=14; Chr3=13; Chr4=12; Chr5=14; Chr6=13;
        </SCRIPT>

    File name = "gamestats"; path = "trellian Webpage/droves E and F/gamestats". I have tried an HTML page, JavaScript, and creating a separate table page and putting the variables into cells, but I'm at a loss on how to arrive at a solution.
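
    A minimal sketch of one possible "external sheet": the browser's localStorage, with a character collapsed into one object first (modern browsers only; the data stays on that machine, so a server-side store is needed if characters must follow the player between computers):

        // Gather one character into an object, then persist it as JSON
        var character = { name: Name1, cls: Class1, level: Level1, hp: Hpts1 };
        localStorage.setItem("gamestats", JSON.stringify(character));

        // On the next login, restore it (falling back to an empty object)
        var saved = JSON.parse(localStorage.getItem("gamestats") || "{}");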

    Read the article

  • Servlet Filter: Socket needs to be referenced in doFilter()

    - by Craig m
    Right now I have a filter that opens its sockets in init(), because for some reason when I open them in doFilter() it doesn't work right with the server app, so I have no choice but to put them in init(). I need to be able to reference outSide.println("test") in doFilter() so I can send that to my server app every time the if statement it sits in is tripped. Here's my code:

        import java.net.*;
        import java.io.*;
        import java.util.*;
        import javax.servlet.*;
        import javax.servlet.http.*;

        public final class IEFilter implements Filter {

            public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                    throws IOException, ServletException {
                String browser = "";
                String blockInfo;
                String address = request.getRemoteAddr();
                if (((HttpServletRequest) request).getHeader("User-Agent").indexOf("MSIE") >= 0) {
                    browser = "Internet Explorer";
                }
                if (browser.equals("Internet Explorer")) {
                    BufferedWriter fW = new BufferedWriter(new FileWriter("C://logs//IElog.rtf"));
                    blockInfo = "Blocked IE user from:" + address;
                    response.setContentType("text/html");
                    PrintWriter out = response.getWriter();
                    out.println("<HTML>");
                    out.println("<HEAD>");
                    out.println("<TITLE>");
                    out.println("This page is not available - JNetProtect");
                    out.println("</TITLE>");
                    out.println("</HEAD>");
                    out.println("<BODY>");
                    out.println("<center><H1>Error 403</H1>");
                    out.println("<br>");
                    out.println("<br>");
                    out.println("<H1>Access Denied</H1>");
                    out.println("<br>");
                    out.println("Sorry, that resource may not be accessed now.");
                    out.println("<br>");
                    out.println("<br>");
                    out.println("<hr />");
                    out.println("<i>Page Filtered By JNetProtect</i>");
                    out.println("</BODY>");
                    out.println("</HTML>");
                    //init.outSide.println("Blocked and Internet Explorer user");
                    fW.write(blockInfo);
                    fW.newLine();
                    fW.close();
                } else {
                    chain.doFilter(request, response);
                }
            }

            public void destroy() {
                outsocket.close();
                outSide.close();
            }

            public void init(FilterConfig filterConfig) {
                try {
                    ServerSocket fs;
                    Socket outsocket;
                    PrintWriter outSide;
                    outsocket = new Socket("Localhost", 1337);
                    outSide = new PrintWriter(outsocket.getOutputStream(), true);
                } catch (Exception e) {
                    System.out.println("error with this connection");
                    e.printStackTrace();
                }
            }
        }
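
    A hedged sketch of one fix: init() currently declares outsocket and outSide as locals, so nothing else can see them; promoting them to instance fields lets doFilter() and destroy() share them (host and port kept from the question):

        public final class IEFilter implements Filter {
            private Socket outsocket;     // shared across init(), doFilter(), destroy()
            private PrintWriter outSide;

            public void init(FilterConfig filterConfig) {
                try {
                    outsocket = new Socket("localhost", 1337);
                    outSide = new PrintWriter(outsocket.getOutputStream(), true);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }

            // ...and inside doFilter(), in the blocking branch:
            //     outSide.println("Blocked an Internet Explorer user");
        }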

    Read the article

  • Analyzing bitmaps produced by NSAffineTransform and CILineOverlay filters

    - by Adam
    I am trying to manipulate an image using a chain of CIFilters, and then examine each byte of the resulting image (bitmap). Long term, I do not need to display the resulting image (bitmap); I just need to "analyze" it in memory. But near-term I am displaying it on screen to help with debugging. I have some "bitmap examination" code that works as expected when examining the NSImage (bitmap representation) I use as my input (loaded from a JPG file into an NSImage). And it SOMETIMES works as expected when I use it on the outputBitmap produced by the code below. More specifically, when I use an NSAffineTransform filter to create outputBitmap, then outputBitmap contains the data I would expect. But if I use a CILineOverlay filter to create the outputBitmap, none of the bytes in the bitmap have any data in them. I believe both of these filters are working as expected, because when I display their results on screen (via outputImageView), they look "correct." Yet when I examine the outputBitmaps, the one created from the CILineOverlay filter is "empty" while the one created from NSAffineTransform contains data. Furthermore, if I chain the two filters together, the final resulting bitmap only seems to contain data if I run the AffineTransform last. This seems very strange to me. My understanding (from reading the CI programming guide) is that the CIImage should be considered an "image recipe" rather than an actual image, because the image isn't actually created until the image is "drawn." Given that, it would make sense that the CIImage bitmap doesn't have data, but I don't understand why it has data after I run the NSAffineTransform but doesn't have data after running the CILineOverlay transform. Basically, I am trying to determine whether creating the NSCIImageRep (ir in the code below) from the CIImage (myResult) is equivalent to "drawing" the CIImage; in other words, whether that should force the bitmap to be populated. If someone knows the answer to this, please let me know; it will save me a few hours of trial and error experimenting! Finally, if the answer is "you must draw to a graphics context"... then I have another question: would I need to do something along the lines of what is described in the Quartz 2D Programming Guide: Graphics Contexts, listings 2-7 and 2-8 (drawing to a bitmap graphics context)? That is the path down which I am about to head, but it seems like a lot of code just to force the bitmap data to be dumped into an array where I can get at it. So if there is an easier or better way, please let me know. I just want to take the data (that should be) in myResult and put it into a bitmap array where I can access it at the byte level. And since I already have code that works with an NSBitmapImageRep, unless doing it that way is a bad idea for some reason that is not readily apparent to me, I would prefer to "convert" myResult into an NSBitmapImageRep.

        CIImage *myResult = [transform valueForKey:@"outputImage"];
        NSImage *outputImage;
        NSCIImageRep *ir = [NSCIImageRep alloc];
        ir = [NSCIImageRep imageRepWithCIImage:myResult];
        outputImage = [[[NSImage alloc] initWithSize:
            NSMakeSize(inputImage.size.width, inputImage.size.height)] autorelease];
        [outputImage addRepresentation:ir];
        [outputImageView setImage:outputImage];

        NSBitmapImageRep *outputBitmap = [[NSBitmapImageRep alloc] initWithCIImage:myResult];

    Thanks, Adam
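
    One hedged way to force the recipe to render into bytes you control is CIContext's render-to-bitmap call; a sketch, assuming a 4-byte-per-pixel ARGB buffer sized to the image extent (buffer layout and context creation are illustrative choices, not the only ones):

        CGRect extent   = [myResult extent];
        size_t width    = (size_t)CGRectGetWidth(extent);
        size_t height   = (size_t)CGRectGetHeight(extent);
        size_t rowBytes = width * 4;
        void *buffer    = calloc(height, rowBytes);

        CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
        CGContextRef cg = CGBitmapContextCreate(buffer, width, height, 8, rowBytes, cs,
                                                kCGImageAlphaPremultipliedFirst);
        CIContext *ctx = [CIContext contextWithCGContext:cg options:nil];

        // This call is the explicit "draw": it forces the recipe to render into buffer
        [ctx render:myResult toBitmap:buffer rowBytes:rowBytes
             bounds:extent format:kCIFormatARGB8 colorSpace:cs];

        // ...analyze buffer byte by byte here, then clean up
        CGContextRelease(cg);
        CGColorSpaceRelease(cs);
        free(buffer);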

    Read the article

  • Spring Security and the Synchronizer Token J2EE pattern, problem when authentication fails.

    - by dfuse
    Hey, we are using Spring Security 2.0.4. We have a TransactionTokenBean which generates a unique token on each POST; the bean is session scoped. The token is used for the duplicate form submission problem (and security). The TransactionTokenBean is called from a servlet filter. Our problem is the following: after a session timeout has occurred, when you do a POST in the application, Spring Security redirects to the logon page, saving the original request. After logging on again, the TransactionTokenBean is created again, since it is session scoped, but then Spring forwards to the originally accessed URL, also sending the token that was generated at that time. Since the TransactionTokenBean has been recreated, the tokens do not match and our filter throws an exception. I don't quite know how to handle this elegantly (or, for that matter, I can't even fix it with a hack); any ideas? This is the code of the TransactionTokenBean:

        public class TransactionTokenBean implements Serializable {

            public static final int TOKEN_LENGTH = 8;

            private RandomizerBean randomizer;
            private transient Logger logger;
            private String expectedToken;

            public String getUniqueToken() {
                return expectedToken;
            }

            public void init() {
                resetUniqueToken();
            }

            public final void verifyAndResetUniqueToken(String actualToken) {
                verifyUniqueToken(actualToken);
                resetUniqueToken();
            }

            public void resetUniqueToken() {
                expectedToken = randomizer.getRandomString(TOKEN_LENGTH, RandomizerBean.ALPHANUMERICS);
                getLogger().debug("reset token to: " + expectedToken);
            }

            public void verifyUniqueToken(String actualToken) {
                if (getLogger().isDebugEnabled()) {
                    getLogger().debug("verifying token. expected=" + expectedToken + ", actual=" + actualToken);
                }
                if (expectedToken == null || actualToken == null || !isValidToken(actualToken)) {
                    throw new IllegalArgumentException("missing or invalid transaction token");
                }
                if (!expectedToken.equals(actualToken)) {
                    throw new InvalidTokenException();
                }
            }

            private boolean isValidToken(String actualToken) {
                return StringUtils.isAlphanumeric(actualToken);
            }

            public void setRandomizer(RandomizerBean randomizer) {
                this.randomizer = randomizer;
            }

            private Logger getLogger() {
                if (logger == null) {
                    logger = Logger.getLogger(TransactionTokenBean.class);
                }
                return logger;
            }
        }

    And this is the servlet filter (ignore the Ajax stuff):

        public class SecurityFilter implements Filter {

            static final String AJAX_TOKEN_PARAM = "ATXTOKEN";
            static final String TOKEN_PARAM = "TXTOKEN";

            private WebApplicationContext webApplicationContext;
            private Logger logger = Logger.getLogger(SecurityFilter.class);

            public void init(FilterConfig config) {
                setWebApplicationContext(WebApplicationContextUtils.getWebApplicationContext(config.getServletContext()));
            }

            public void destroy() {
            }

            public void doFilter(ServletRequest req, ServletResponse response, FilterChain chain)
                    throws IOException, ServletException {
                HttpServletRequest request = (HttpServletRequest) req;
                if (isPostRequest(request)) {
                    if (isAjaxRequest(request)) {
                        log("verifying token for AJAX request " + request.getRequestURI());
                        getTransactionTokenBean(true).verifyUniqueToken(request.getParameter(AJAX_TOKEN_PARAM));
                    } else {
                        log("verifying and resetting token for non-AJAX request " + request.getRequestURI());
                        getTransactionTokenBean(false).verifyAndResetUniqueToken(request.getParameter(TOKEN_PARAM));
                    }
                }
                chain.doFilter(request, response);
            }

            private void log(String line) {
                if (logger.isDebugEnabled()) {
                    logger.debug(line);
                }
            }

            private boolean isPostRequest(HttpServletRequest request) {
                return "POST".equals(request.getMethod().toUpperCase());
            }

            private boolean isAjaxRequest(HttpServletRequest request) {
                return request.getParameter("AJAXREQUEST") != null;
            }

            private TransactionTokenBean getTransactionTokenBean(boolean ajax) {
                return (TransactionTokenBean) webApplicationContext.getBean(ajax ? "ajaxTransactionTokenBean" : "transactionTokenBean");
            }

            void setWebApplicationContext(WebApplicationContext context) {
                this.webApplicationContext = context;
            }
        }

    Read the article

  • Losing sessions on GlassFish

    - by synti
    I have a web application that logs users in via a @SessionScoped managed bean. It's all basic stuff, pretty much like this: a user logs in using a regular HTTP form and gets redirected to the user area (which is protected by a filter). But if any resource in that area is accessed, the request somehow uses a new session, which has no managed bean and no user, so the filter does its job and redirects him to the login page. Here's the login form:

        <h:form>
            <h:outputLabel for="email" value="Email "/>
            <p:inputText id="email" size="30" value="#{loginManager.email}"/>
            <h:outputLabel for="password" value="Password "/>
            <p:password id="password" size="12" value="#{loginManager.password}"/>
            <p:commandButton value="Login" action="#{loginManager.login()}"/>
        </h:form>

    The loginManager managed bean:

        @ManagedBean
        @SessionScoped
        public class LoginManager implements Serializable {

            @EJB
            private UserService userService;

            private User user;
            private String email;
            private String password;

            public String login() {
                user = userService.findBy(email, password);
                if (user == null) {
                    // FacesMessage stuff
                } else {
                    return "/user/welcome.xhtml?faces-redirect=true";
                }
            }

            public String logout() {
                FacesContext.getCurrentInstance().getExternalContext().invalidateSession();
                return "/index.xhtml?faces-redirect=true";
            }

            // Getters, setters (no setter for user) and serialVersionUID
        }

    And then comes the filter that protects the user area:

        @WebFilter(urlPatterns="/user/*", displayName="UserFilter")
        public class UserFilter implements Filter {

            @Override
            public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                    throws IOException, ServletException {
                HttpSession session = ((HttpServletRequest) request).getSession(false);
                LoginManager loginManager = (LoginManager) session.getAttribute("loginManager");
                if (loginManager == null || !loginManager.hasUser()) {
                    HttpServletResponse resp = (HttpServletResponse) response;
                    resp.sendRedirect("index.xhtml");
                }
                final User user = loginManager.getUser();
                if (user.isValid()) {
                    chain.doFilter(request, response);
                } else {
                    HttpServletResponse resp = (HttpServletResponse) response;
                    resp.sendRedirect("index.xhtml");
                }
            }
        }

    The UserService is just a stateless EJB that handles persistence. Part of the JSF for the user area:

        <h:form>
            <p:panelMenu>
                <p:submenu label="Items">
                    <p:menuitem value="Add item" action="#{userItens.addItems}" ajax="false"/>
                    <p:menuitem value="My items" />
                </p:submenu>
            </p:panelMenu>
        </h:form>

    And finally the userItens managed bean:

        @ManagedBean
        @RequestScoped
        public class UserItens {

            private User user;

            @PostConstruct
            private void init() {
                HttpSession session = (HttpSession) FacesContext.getCurrentInstance()
                        .getExternalContext().getSession(false);
                LoginManager loginManager = (LoginManager) session.getAttribute("loginManager");
                if (loginManager != null) user = loginManager.getUser();
            }

            public String addItems() {
                // Doesn't get here. Seems like UserFilter comes first, doesn't find
                // an user and redirects.
            }
        }

    I'm using GlassFish, and the session timeout is currently set to 0.
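
    A hedged sketch of a more defensive filter body, independent of the session-loss question itself: getSession(false) can return null after a timeout, and sendRedirect() must be followed by return so the rest of doFilter() doesn't run and throw a NullPointerException:

        HttpSession session = ((HttpServletRequest) request).getSession(false);
        LoginManager loginManager =
                (session == null) ? null : (LoginManager) session.getAttribute("loginManager");
        if (loginManager == null || !loginManager.hasUser()) {
            ((HttpServletResponse) response).sendRedirect(
                    ((HttpServletRequest) request).getContextPath() + "/index.xhtml");
            return;  // without this, the code below dereferences a null LoginManager
        }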

    Read the article

  • JavaScript String Library - Hitting a Minor Roadblock

    - by OneNerd
    OK, I am trying to create a string library that contains a handful of useful things missing from JavaScript. Here is what I have so far:

        ;function $__STRING__$(in_string) {

            /* internal functions */
            this.s = in_string;
            this.toString = function(){ return this.s; };

            /******* these functions CAN be chained (they return the $__STRING__$ object) ******/
            this.uppercase    = function(){ this.s = this.s.toUpperCase(); return this; };
            this.lowercase    = function(){ this.s = this.s.toLowerCase(); return this; };
            this.trim         = function(){ this.s = this.s.replace(/^\s+|\s+$/g,""); return this; };
            this.ltrim        = function(){ this.s = this.s.replace(/^\s+/,""); return this; };
            this.rtrim        = function(){ this.s = this.s.replace(/\s+$/,""); return this; };
            this.striptags    = function(){ this.s = this.s.replace(/<\/?[^>]+(>|$)/g, ""); return this; };
            this.escapetags   = function(){ this.s = this.s.replace(/</g,"&lt;").replace(/>/g,"&gt;"); return this; };
            this.unescapetags = function(){ this.s = this.s.replace(/&lt;/g,"<").replace(/&gt;/g,">"); return this; };
            this.underscorize = function(){ this.s = this.s.replace(/ /g,"_"); return this; };
            this.dasherize    = function(){ this.s = this.s.replace(/ /g,"-"); return this; };
            this.spacify      = function(){ this.s = this.s.replace(/_/g," "); return this; };
            this.left         = function(length){ this.s = this.s.substring(length,0); return this; };
            this.right        = function(length){ this.s = this.s.substring(this.s.length,this.s.length-length); return this; };
            this.shorten      = function(length){ if(this.s.length<=length){ return this.s; } else { this.left(this.s,length)+"..."; return this; } };
            this.mid          = function(start,length){ return this.s.substring(start,(length+start)); };
            this._down        = function(){ return this.s; }; // breaks chain, but lets you run core js string functions

            /******* these functions CANNOT be chained (they do not return the $__STRING__$ object) ******/
            this.contains   = function(needle){ if(this.s.indexOf(needle)!==-1){ return true; } else { return false; } };
            this.startswith = function(needle){ if(this.left(this.s,needle.length)==needle){ return true; } else { return false; } };
            this.endswith   = function(needle){ if(this.right(this.s,needle.length)==needle){ return true; } else { return false; } };
        }

        function $E(in_string){ return new $__STRING__$(in_string); }

        String.prototype._enhance = function(){ return new $__STRING__$(this); };
        String.prototype._up      = function(){ return new $__STRING__$(this); };

    (Note: the escapetags/unescapetags bodies use HTML entities; they had been mangled by the page they were pasted from.) It works fairly well, and I can chain commands, etc. I set it up so I can cast a string as an enhanced string in these two ways:

        $E('some string');
        'some string'._enhance();

    However, each time I want to use a built-in string method, I need to convert it back to a string first. So for now, I put in _down() and _up() methods, like so:

        alert( $E("hello man").uppercase()._down().replace("N", "Y")._up().dasherize() );
        alert( "hello man"._enhance().uppercase()._down().replace("N", "Y")._up().dasherize() );

    It works fine, but what I really want is to be able to use all of the built-in functions a string can use. I realize I can just replicate each function inside my object, but I was hoping there was a simpler way. So the question is: is there an easy way to do that? Thanks.
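
    A hedged sketch of one simpler route: generate chainable wrappers for a list of native String methods instead of hand-writing each one (the method list is illustrative, not exhaustive; prototype methods are visible to every instance created with new $__STRING__$):

        // Wrap selected built-in String methods so they stay chainable
        var natives = ["replace", "substring", "slice", "concat", "split"];
        for (var i = 0; i < natives.length; i++) {
            (function (name) {
                $__STRING__$.prototype[name] = function () {
                    var result = String.prototype[name].apply(this.s, arguments);
                    if (typeof result === "string") { this.s = result; return this; }
                    return result;  // e.g. split() returns an array, so the chain ends
                };
            })(natives[i]);
        }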

    Read the article

  • Applying Unity in dynamic menu

    - by Rajarshi
    I was going through Unity 2.0 to check whether it has an effective use in our new application. My application is a Windows Forms application and currently uses a traditional bar menu (at the top). My UIs (Windows Forms) more or less support the Dependency Injection pattern, since they all work with a class (a Presentation Model class) supplied to them via the constructor. The form then binds to the properties of the supplied P Model class and calls methods on it to perform its duties. Pretty simple and straightforward. How the P Model reacts to UI actions and responds to them by co-ordinating with the domain class (business logic/model) is irrelevant here and thus not mentioned. The object creation sequence to show one UI from the menu then goes like this:

      - Create the Business Model instance.
      - Create the Presentation Model instance, with the Business Model instance passed to the P Model constructor.
      - Create the UI instance, with the Presentation Model instance passed to the UI constructor.

    My present solution: to show a UI from my menu using the method above, I would have to reference all the assemblies (Business, PModel, UI) from my Menu class. Considering I have split the modules into a number of physical assemblies, it would be a difficult task to add references to about 60 different assemblies. Also, the approach is not very scalable, since I would certainly need to release more modules, and with this approach I would have to change the source code every time I release a new module. So, primarily to avoid referencing so many assemblies from my Menu class (assembly), I stored all the dependencies described above in a database table (SQL Server), e.g.

        ModuleShortCode | BModelAssembly | BModelFullTypeName | PModelAssembly | PModelFullTypeName | UIAssembly | UIFullTypeName

    I then used a static class named "Launcher" with a method "Launch", as below:

        Launcher.Launch("Discount")
        Launcher.Launch("Customers")

    The Launcher internally uses the data from the dependency table and uses Activator.CreateInstance() to create each of the objects, using each instance as a constructor parameter for the next object being created, until the UI is built. The UI is then shown as a modal dialog. The code inside Launcher is somewhat like:

        Form frm = ResolveForm("Discount");
        frm.ShowDialog();

    ResolveForm does the trick of building the chain of objects. Can Unity help me here? When I did this I did not have enough information on Unity, and now that I have studied Unity I think I have been doing more or less the same thing. So I tried to replace my code with Unity. However, as soon as I started, I hit a block. If I try to resolve UI forms in my Menu as

        Form customers = myUnityContainer.Resolve<Customers>();

    or

        Form customers = myUnityContainer.Resolve(typeof(Customers));

    then either way I need to reference my UI assembly from my Menu assembly, since the target type "Customers" needs to be known for Unity to resolve it. So I am back in the same place, since I would have to reference all UI assemblies from the Menu assembly. I understand that with Unity I would have to reference fewer assemblies (only the UI assemblies), but those references are still needed, which defeats my objectives below:

      - Create the chain of objects dynamically without any assembly reference from the Menu assembly. This is to avoid the Menu source code changing every time I release a new module. My Menu is also built dynamically from a table.
      - Be able to supply new modules just by supplying the new assemblies and inserting the new dependency row into the table via a database patch.

    At this stage, I have a feeling that I have to do it the way I was doing it, i.e. with Activator.CreateInstance(), to fulfil all my objectives. I need to verify whether the community thinks the same way as me or has a better suggestion to solve the problem. The post is really long and I sincerely thank you if you have come this far. Waiting for your valuable suggestions. Rajarshi
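
    A hedged C# sketch of keeping the Menu assembly reference-free with Unity: store assembly-qualified type names in the dependency table, load them with Type.GetType(), and use the container's non-generic Resolve(Type) overload. The type names below are hypothetical; Unity walks the constructor chain itself when the constructor parameters are concrete, resolvable types:

        using System;
        using System.Windows.Forms;
        using Microsoft.Practices.Unity;

        // Value read from the dependency table's UIAssembly/UIFullTypeName columns (hypothetical)
        Type uiType = Type.GetType("Modules.Customers.CustomersForm, Modules.Customers.UI");

        var container = new UnityContainer();

        // Unity builds CustomersForm(PModel) and PModel(BModel) automatically,
        // so this is essentially Activator.CreateInstance() plus dependency wiring.
        Form frm = (Form)container.Resolve(uiType);
        frm.ShowDialog();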

    Read the article

  • Sharing a texture resource from DX11 to DX9 to WPF, need to wait for DeviceContext.Flush() to finish

    - by Rei Miyasaka
    I'm following these instructions on The Code Project for rendering from DirectX to WPF using D3DImage. The trouble is that I now have no swap chain to call Present() on, which according to the article shouldn't be a problem, but my back buffer definitely wasn't being copied. An additional step I have to take before I can copy the texture to WPF is to share it with a second D3D9Ex device, since D3DImage only works with DX9 (which is understandable, as WPF is built on DX9). To that end, I've modified some SlimDX code to work with DirectX 11. I tried calling DeviceContext.Flush() (on the immediate context) at the end of each render cycle, which kind of works: it shows my renderings most of the time, but for maybe 3 or 4 frames out of 60 each second it draws my clear color instead. This makes sense, since Flush() is non-blocking; it doesn't wait for the GPU to do its thing the way SwapChain.Present() does. Any idea what the proper solution is? I have a feeling it has something to do with my texture parameters for the back buffer, but I don't know.
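
    A hedged sketch of the usual synchronization fix, in raw D3D11 terms (SlimDX wraps the same calls): issue a D3D11_QUERY_EVENT after Flush() and wait until the GPU reports the frame finished before letting the D3D9Ex/WPF side read the shared texture:

        // Created once, alongside the device
        D3D11_QUERY_DESC qd = {};
        qd.Query = D3D11_QUERY_EVENT;
        ID3D11Query* frameDone = nullptr;
        device->CreateQuery(&qd, &frameDone);

        // Each frame, after rendering into the shared texture
        context->End(frameDone);
        context->Flush();
        while (context->GetData(frameDone, nullptr, 0, 0) == S_FALSE) {
            // spin (or Sleep(0)) until the GPU has drained the queued work
        }
        // Now it is safe for D3DImage/D3D9Ex to present the shared surface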

    Read the article

  • Parallelism in .NET – Part 7, Some Differences between PLINQ and LINQ to Objects

    - by Reed
    In my previous post on Declarative Data Parallelism, I mentioned that PLINQ extends LINQ to Objects to support parallel operations. Although nearly all of the same operations are supported, there are some differences between PLINQ and LINQ to Objects. By introducing parallelism to our declarative model, we add some extra complexity. This, in turn, adds some extra requirements that must be addressed. In order to illustrate the main differences, and why they exist, let's begin by discussing some differences in how the two technologies operate, and look at the underlying types involved in LINQ to Objects and PLINQ.

    LINQ to Objects is mainly built upon a single class: Enumerable. The Enumerable class is a static class that defines a large set of extension methods, nearly all of which work upon an IEnumerable<T>. Many of these methods return a new IEnumerable<T>, allowing the methods to be chained together into a fluent style interface. This is what allows us to write statements that chain together, and lead to the nice declarative programming model of LINQ:

        double min = collection
            .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24)
            .Min(item => item.PerformComputation());

    Other LINQ variants work in a similar fashion. For example, most data-oriented LINQ providers are built upon an implementation of IQueryable<T>, which allows the database provider to turn a LINQ statement into an underlying SQL query, to be performed directly on the remote database.

    PLINQ is similar, but instead of being built upon the Enumerable class, most of PLINQ is built upon a new static class: ParallelEnumerable. When using PLINQ, you typically begin with any collection which implements IEnumerable<T>, and convert it to a new type using an extension method defined on ParallelEnumerable: AsParallel(). This method takes any IEnumerable<T>, and converts it into a ParallelQuery<T>, the core class for PLINQ. There is a similar ParallelQuery class for working with non-generic IEnumerable implementations.

    This brings us to our first subtle, but important difference between PLINQ and LINQ – PLINQ always works upon specific types, which must be explicitly created. Typically, the type you'll use with PLINQ is ParallelQuery<T>, but it can sometimes be a ParallelQuery or an OrderedParallelQuery<T>. Instead of dealing with an interface, implemented by an unknown class, we're dealing with a specific class type. This works seamlessly from a usage standpoint – ParallelQuery<T> implements IEnumerable<T>, so you can always "switch back" to an IEnumerable<T>. The difference only arises at the beginning of our parallelization. When we're using LINQ, and we want to process a normal collection via PLINQ, we need to explicitly convert the collection into a ParallelQuery<T> by calling AsParallel().

    There is an important consideration here – AsParallel() does not need to be called on your specific collection, but rather any IEnumerable<T>. This allows you to place it anywhere in the chain of methods involved in a LINQ statement, not just at the beginning. This can be useful if you have an operation which will not parallelize well or is not thread safe. For example, the following is perfectly valid, and similar to our previous examples:

        double min = collection
            .AsParallel()
            .Select(item => item.SomeOperation())
            .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24)
            .Min(item => item.PerformComputation());

    However, if SomeOperation() is not thread safe, we could just as easily do:

        double min = collection
            .Select(item => item.SomeOperation())
            .AsParallel()
            .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24)
            .Min(item => item.PerformComputation());

    In this case, we're using standard LINQ to Objects for the Select(…) method, then converting the results of that map routine to a ParallelQuery<T>, and processing our filter (the Where method) and our aggregation (the Min method) in parallel.

    PLINQ also provides us with a way to convert a ParallelQuery<T> back into a standard IEnumerable<T>, forcing sequential processing via standard LINQ to Objects. If SomeOperation() was thread-safe, but PerformComputation() was not thread-safe, we would need to handle this by using the AsEnumerable() method:

        double min = collection
            .AsParallel()
            .Select(item => item.SomeOperation())
            .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24)
            .AsEnumerable()
            .Min(item => item.PerformComputation());

    Here, we're converting our collection into a ParallelQuery<T>, doing our map operation (the Select(…) method) and our filtering in parallel, then converting the collection back into a standard IEnumerable<T>, which causes our aggregation via Min() to be performed sequentially.

    This could also be written as two statements, which would allow us to use the language integrated syntax for the first portion:

        var tempCollection = from item in collection.AsParallel()
                             let e = item.SomeOperation()
                             where (e.SomeProperty > 6 && e.SomeProperty < 24)
                             select e;
        double min = tempCollection.AsEnumerable().Min(item => item.PerformComputation());

    This allows us to use the standard LINQ style language integrated query syntax, but control whether it's performed in parallel or serial by adding AsParallel() and AsEnumerable() appropriately.

    The second important difference between PLINQ and LINQ deals with order preservation. PLINQ, by default, does not preserve the order of the source collection. This is by design. In order to process a collection in parallel, the system needs to naturally deal with multiple elements at the same time. Maintaining the original ordering of the sequence adds overhead, which is, in many cases, unnecessary. Therefore, by default, the system is allowed to completely change the order of your sequence during processing. If you are doing a standard query operation, this is usually not an issue. However, there are times when keeping a specific ordering in place is important. If this is required, you can explicitly request the ordering be preserved throughout all operations done on a ParallelQuery<T> by using the AsOrdered() extension method. This will cause our sequence ordering to be preserved.

    For example, suppose we wanted to take a collection, perform an expensive operation which converts it to a new type, and display the first 100 elements. In LINQ to Objects, our code might look something like:

        // Using IEnumerable<SourceClass> collection
        IEnumerable<ResultClass> results = collection
            .Select(e => e.CreateResult())
            .Take(100);

    If we just converted this to a parallel query naively, like so:

        IEnumerable<ResultClass> results = collection
            .AsParallel()
            .Select(e => e.CreateResult())
            .Take(100);

    we could very easily get a very different, and non-reproducible, set of results, since the ordering of elements in the input collection is not preserved. To get the same results as our original query, we need to use:

        IEnumerable<ResultClass> results = collection
            .AsParallel()
            .AsOrdered()
            .Select(e => e.CreateResult())
            .Take(100);

    This requests that PLINQ process our sequence in a way that verifies that our resulting collection is ordered as if it were processed serially. This will cause our query to run slower, since there is overhead involved in maintaining the ordering. However, in this case, it is required, since the ordering is required for correctness.

    PLINQ is incredibly useful. It allows us to easily take nearly any LINQ to Objects query and run it in parallel, using the same methods and syntax we've used previously. There are some important differences in operation that must be considered, however – it is not a free pass to parallelize everything. When using PLINQ in order to parallelize your routines declaratively, the same guideline I mentioned before still applies: Parallelization is something that should be handled with care and forethought, added by design, and not just introduced casually.

    Read the article

  • Upgrading log shipping from 2005 to 2008 or 2008R2

    - by DavidWimbush
    If you're using log shipping you need to be aware of some small print. The general idea is to upgrade the secondary server first and then the primary server, because you can continue to log ship from 2005 to 2008R2. But this won't work if you're keeping your secondary databases in STANDBY mode rather than IN RECOVERY. If you're using native log shipping you'll have some work to do. If you've rolled your own log shipping (ahem) you can convert a STANDBY database to IN RECOVERY like this:

        restore database [dw] with norecovery;

    and then change your restore code to use WITH NORECOVERY instead of WITH STANDBY. (Finally all that aggravation pays off!) You can either upgrade the secondary server in place or rebuild it. A secondary database doesn't actually get upgraded until you recover it, so the log sequence chain is not broken and you can continue shipping from the primary. Just remember that it can take quite some time to upgrade a database, so you need to factor that into the expectations you give people about how long it will take to fail over. For more details, check this out: http://msdn.microsoft.com/en-us/library/cc645954(SQL.105).aspx
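
    For the restore code itself, the change is just the recovery option; a sketch with hypothetical file names:

        -- Before: STANDBY leaves the secondary readable between restores
        RESTORE LOG [dw] FROM DISK = N'\\share\logs\dw_0930.trn'
        WITH STANDBY = N'\\share\logs\dw_undo.dat';

        -- After: NORECOVERY is what lets 2005 logs keep shipping to 2008/2008R2
        RESTORE LOG [dw] FROM DISK = N'\\share\logs\dw_0930.trn'
        WITH NORECOVERY;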

    Read the article

  • Reminder: Premier Support for 10gR2 10.2.0.4 Database ends July 2010

    - by Steven Chan
    Regular readers know that Premier Support for the Oracle 10gR2 Database ends in July 2010, a scant few months from now. What does that mean for E-Business Suite environments running on this database? The Oracle E-Business Suite comprises products like Financials, Supply Chain, Procurement, and so on. Support windows for the E-Business Suite and these associated applications products are listed here:

        Oracle Lifetime Support > "Lifetime Support Policy: Oracle Applications" (PDF)

    The Oracle E-Business Suite can run on a variety of database releases, including 10gR2, 11gR1, and 11gR2. Support windows for database releases are listed here:

        Oracle Lifetime Support > "Lifetime Support Policy: Oracle Technology Products" (PDF)

    Looking at those two documents together, you'll see that:

      - Premier Support for Oracle E-Business Suite Release 11i ends on November 30, 2010
      - Premier Support for Oracle E-Business Suite Release 12 ends on January 31, 2012
      - Premier Support for Oracle E-Business Suite Release 12.1 ends on May 31, 2014
      - Premier Support for Oracle Database 10.2 (a.k.a. 10gR2) ends on July 31, 2010

    [Note: These are the Premier Support dates as of today. If you've arrived at this article in the future via a search engine, you must check the latest dates in the Lifetime Support Policy documents above; these dates are subject to change.]

    The original post includes a diagram (hard to read within this blog's layout) showing the Premier and Extended Support windows for the last four major database releases certified with Apps 11i.

    Do the EBS Premier Support dates trump the 10gR2 DB date? No. Each of the support policies applies individually to your combined EBS + DB configuration. The support dates for a given EBS release don't override the Database support policy.

    Read the article

  • push email / email server tutorial

    - by David A
    Does anyone happen to know the current status of push email in the Linux world? From my searching so far I have seen Z-push (http://www.ifusio.com/blog/setup-your-own-push-mail-server-with-z-push-on-debian-linux) and https://peterkieser.com/2011/03/25/androids-k-9-mail-battery-life-and-dovecots-push-imap/ . Are there other solutions? Does anyone have any experience with these? They're somewhat different, in that Z-push seems to work in conjunction with an existing IMAP server. Some time ago I did manage to compile and build Dovecot 2 (since only Dovecot 1 was available in the Ubuntu repos at the time). It would have been a real fluke, because I had no idea what I was doing, but it seemed to work well with my mobile phone; that said, I can't say for sure that it was pushing, but it seemed like it. Anyway, I'm here again and looking to set up a mail server. I'm hoping to do a better job this time around, with virtual users and such. Without installing ispconfig3 (or something similar), does anyone have any recent email server tutorials (that cover all aspects: MTA, MDA...) that can supply push email on an Ubuntu 12.04 server? (I'm probably of slightly above newb status, but not far.) Thanks a bunch
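
    For what it's worth, the "push" behaviour K-9 and most IMAP clients rely on is just IMAP IDLE, which Dovecot 2 enables out of the box; a minimal dovecot.conf sketch (values illustrative, not tuned):

        # /etc/dovecot/dovecot.conf
        protocols = imap

        # IDLE is on by default; this only controls how often idle clients are nudged
        imap_idle_notify_interval = 2 mins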

    Read the article

  • Rush… iPad pre-order announced officially

    - by samsudeen
    Apple’s latest product iPAD is now available for pre-order through online. You can place your pre-order through its online store (Apple) or reserve it at any of the Apple retail stores. iPAD may have received mixed reactions when announced last month. But Apple knows how to sell; it is believed that more than 50,000 pre orders are already placed till now placed till now. People have to wait for another 3 weeks to get the actual device as the launch date is 3rd of April in the US. The initial model released will be available only with Wi-Fi and the planned 3G model is expected to be released by end of April. So how much does it cost you to get this little marvel? The basic iPAD (16 GB Wi-Fi) will cost you $499. if you are serious apple fan and plan to buy an iPAD better place your order now. There already rumours that the initial demand may outstrip supply.The pre-order is limited only to US. Join us on Facebook to read all our stories right inside your Facebook news feed.

    Read the article

  • David Cameron addresses - The Oracle Retail Week Awards 2012

    - by user801960
    The Oracle Retail Week Awards 2012 were held last night. In case you missed the action, the introduction video for the Oracle Retail Week Awards 2012 is below, featuring interviews with UK Prime Minister David Cameron, Acting Editor of Retail Week George MacDonald, the judges for the awards, and key figureheads in British retail. Check back on the blog in the next couple of days for more videos, interviews and insights from the awards.

    Oracle Retail and "Your Experience Platform"

    Technology is the key to providing that differentiated retail experience. More specifically, it is what we at Oracle call 'the experience platform': a set of integrated, cross-channel business technology solutions, selected and operated by a retail business and IT team, and deployed in accordance with that organisation's individual strategy and processes. This business systems architecture simultaneously:

      - Connects customer interactions across all channels and touchpoints, and every customer lifecycle phase, to provide a differentiated customer experience that meets consumers' needs and expectations.
      - Delivers actionable insight that enables smarter decisions in planning, forecasting, merchandising, supply chain management, marketing, etc.
      - Optimises operations to align every aspect of the retail business to gain efficiencies and economies, to align KPIs to eliminate strategic conflicts, and at the same time work in support of customer priorities.

    Working in unison, these three goals not only help retailers to successfully navigate the challenges of today (identified in the previous session on this stage) but also to focus on delivering that personalised customer experience based on differentiated products, pricing, services and interactions that will help you to gain market share and grow sales.

    Read the article

  • SmartAssembly Support: How to change the maps folder

    - by Bart Read
    If you've set up SmartAssembly to store error reports in a SQL Server database, you'll also have specified a folder for the map files that are used to de-obfuscate error reports (see Figure 1). Whilst you can change the database easily enough, you can't change the map folder path via the UI - if you click on it, it'll just open the folder in Explorer - but never fear: you can change it manually, and fortunately it's not that difficult. (If you want to get to these settings, click the Tools > Options link on the left-hand side of the SmartAssembly main window.)

    Figure 1. Error reports database settings in SmartAssembly.

    The folder path is actually stored in the database, so you just need to open SQL Server Management Studio, connect to the SQL Server where your error reports database is stored, then open a new query on the SmartAssembly database by right-clicking on it in the Object Explorer, then clicking New Query (see Figure 2).

    Figure 2. Opening a new query against the SmartAssembly error reports database in SQL Server.

    Now execute the following SQL query in the new query window:

        SELECT * FROM dbo.Information

    You should find that you get a result set rather like that shown in Figure 3. You can see that the map folder path is stored in the MapFolderNetworkPath column.

    Figure 3. Contents of the dbo.Information table, showing the map folder path I set in SmartAssembly.

    All I need to do to change this is execute the following SQL:

        UPDATE dbo.Information
        SET MapFolderNetworkPath = '\\UNCPATHTONEWFOLDER'
        WHERE MapFolderNetworkPath = '\\dev-ltbart\SAMaps'

    This will change the map folder path to whatever I supply in the SET clause. Once you've done this, you can verify the change by executing the following again:

        SELECT * FROM dbo.Information

    You should find the result set contains the new path you've set.

    Read the article
