Search Results

Search found 1926 results on 78 pages for 'cookie monster'.

Page 63/78

  • Best way to migrate export/import from SQL Server to oracle

    - by matao
    Hi guys! I need reporting access to some data that lives in Oracle and other data that lives in a SQL Server 2000 database. For various reasons these live on different sides of a firewall. Now we're looking at doing an export/import from SQL Server to Oracle and I'd like some advice on the best way to go about it... The procedure will need to be fully automated and run nightly, so that excludes using the SQL developer tools. I also can't make a live link between databases from our (Oracle) side as the firewall is in the way. The data needs to be transformed in the process from a star schema to a de-normalised table ready for reporting. What I'm thinking about is writing a monster query for SQL Server (which I mostly have already) that will denormalise and read out the data from SQL Server into a flat file using the SQL Server equivalent of SQL*Plus as a scheduled task, dump it into a well-known location, then on the Oracle side have a cron job that copies down the file, loads it with SQL*Loader and rebuilds indexes etc. This is all doable, but very manual. Is there one tool, or a combination of FOSS or standard Oracle/SQL Server tools, that could automate this for me? The irreducible complexity is the query on one side and building indexes on the other, but I would love not to have to write the CSV dumping detail or the SQL*Loader script - just say "dump this view out to CSV" on one side, and on the other "truncate and insert into this table from CSV" - and not worry about mapping column names and all the other arcane sqlldr voodoo... Best practices? Thoughts? Comments? Edit: I have about 50+ columns, all of varying types and lengths, in my dataset, which is why I'd prefer not to have to write out how to generate and map each single column...
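    A rough sketch of the kind of pipeline being described, with every name (server, view, file, credentials) hypothetical: bcp, which ships with SQL Server, can dump a view straight to a delimited file, and a generic SQL*Loader control file handles the truncate-and-load on the Oracle side without per-column code.

        REM Scheduled task on the SQL Server side (all names are placeholders):
        bcp "ReportingDB.dbo.vw_Denormalised" out "C:\exports\reporting.csv" -c -t"," -S SQLHOST01 -T

        # Cron job on the Oracle side, after copying the file down:
        sqlldr userid=rpt/secret@ORCL control=reporting.ctl log=reporting.log

        -- reporting.ctl (SQL*Loader control file)
        LOAD DATA
        INFILE 'reporting.csv'
        TRUNCATE
        INTO TABLE reporting_flat
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        TRAILING NULLCOLS
        (col1, col2, col3)  -- list the 50+ columns once, in the view's order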

    Read the article

  • Are we in demand?

    - by dotnetdev
    I was made redundant at the end of November. This wasn't because I lacked the required skills (although I'm a youngster and, in career terms, a junior dev - though I knew a lot more than was called for in my job). Anyway, I was laid off due to the whole recession/credit crunch thing going on. I worked for a small company, money got tight and I had to go. I haven't made a thread about this before, but I have seen threads about others being laid off and experiencing a similar fate. This leads me to the question: what is the job market like for developers? Are we in demand? I ask this question on a global level, but I live in London, UK (in case anyone comes across this thread from the same area). I am a .NET dev but my secondary skillset is Flex (ActionScript too) and Java, which my personal portfolio is made with. I hope to be strong enough in these to work with them commercially after a few more months of practice. Then more jobs will be applicable to me. Unfortunately, I use agencies and sites like Jobserve/Monster.com, but no new jobs are ever posted on there, so once you've applied to all the relevant jobs, then what? What's more, a lot of companies are putting a freeze on recruitment. Thanks

    Read the article

  • Large Switch statements: Bad OOP?

    - by Mystere Man
    I've always been of the opinion that large switch statements are a symptom of bad OOP design. In the past, I've read articles that discuss this topic, and they have provided alternative OOP-based approaches, typically based on polymorphism to instantiate the right object to handle the case. I'm now in a situation that has a monstrous switch statement based on a stream of data from a TCP socket, in which the protocol consists of, basically, a newline-terminated command, followed by lines of data, followed by an end marker. The command can be one of 100 different commands, so I'd like to find a way to reduce this monster switch statement to something more manageable. I've done some googling to find the solutions I recall, but sadly, Google has become a wasteland of irrelevant results for many kinds of queries these days. Are there any patterns for this sort of problem? Any suggestions on possible implementations? One thought I had was to use a dictionary lookup, matching the command text to the object type to instantiate. This has the nice advantage of merely creating a new object and inserting a new command/type in the table for any new commands. However, this also has the problem of type explosion: I now need 100 new classes, plus I have to find a way to interface them cleanly to the data model. Is the "one true switch statement" really the way to go? I'd appreciate your thoughts, opinions, or comments.
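    As a hedged sketch of the dictionary-lookup idea (all names invented), the table can map command strings to handler delegates rather than to types, which removes the switch without forcing one class per command:

        using System;
        using System.Collections.Generic;

        // Hypothetical dispatcher: command text -> handler delegate.
        public class CommandDispatcher
        {
            private readonly Dictionary<string, Action<IList<string>>> handlers =
                new Dictionary<string, Action<IList<string>>>(StringComparer.OrdinalIgnoreCase);

            public void Register(string command, Action<IList<string>> handler)
            {
                handlers[command] = handler;
            }

            // 'command' is the newline-terminated command line; 'dataLines' are the
            // lines read up to the end marker.
            public void Dispatch(string command, IList<string> dataLines)
            {
                Action<IList<string>> handler;
                if (handlers.TryGetValue(command, out handler))
                    handler(dataLines);
                else
                    throw new InvalidOperationException("Unknown command: " + command);
            }
        }

        // Usage sketch: dispatcher.Register("LOGIN", lines => session.Login(lines));
        //               dispatcher.Dispatch(commandLine, bufferedDataLines);

    Commands that genuinely need state or complex parsing can still get their own class; the point is that only those commands pay that cost, not all 100.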

    Read the article

  • How to handle 30k files in a project which requires them?

    - by Jeremiah
    Visual Studio 2010 RC - Silverlight Application. We have a library of images that we need to have access to. They are given to us by a vendor (through an installer) and they are not in a database; they are files in a folder (a very large monster of a folder). We do not control when the images change, so the vendor needs to be able to override them individually. We get updates frequently enough from this vendor to say that these images change "randomly" and without our (programmer) knowledge. The problem: I don't want 30K images in SVN. Heck, I don't even want to imagine them in my Solution. However, our application requires them in order to run properly. So, our build/staging servers need access to these images (we have two build servers). The question: how would you handle it when your application will not work as specified without access to each of 30k images and you don't control when those images change? I do not want to have a crazy large SVN repository. Because I don't know when any of these images change, I really don't want them in my solution (and definitely do not want a large solution, either). I also don't want a bunch of manual steps to perform every time these images change. Our mantra, up to this point, has always been that any developer can download from SVN, compile and run our app. These images are going to kill that mantra. I'm tempted to make a WCF service that will return images if they exist and a dummy image if they don't. This way all dev boxes will return a dummy image, and our build/staging/production boxes (the ones that actually have the vendor's image installer installed) will return real images. This has to be a solved problem. What have other people done to handle these types of problems? I'm open to suggestions.
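    A minimal sketch of the dummy-image fallback being considered (root path, placeholder name and hosting mechanism are all assumptions; the real thing might sit behind a WCF service or an HTTP handler):

        using System.IO;

        // Hypothetical provider: serves the vendor file when it exists locally,
        // otherwise a single checked-in placeholder. Dev boxes only need the
        // placeholder; build/staging boxes with the vendor installer get the
        // real images.
        public class VendorImageProvider
        {
            private readonly string imageRoot;   // e.g. @"D:\VendorImages"
            private readonly string dummyImage;  // e.g. @"C:\App\Content\placeholder.png"

            public VendorImageProvider(string imageRoot, string dummyImage)
            {
                this.imageRoot = imageRoot;
                this.dummyImage = dummyImage;
            }

            public byte[] GetImage(string fileName)
            {
                string path = Path.Combine(imageRoot, fileName);
                return File.ReadAllBytes(File.Exists(path) ? path : dummyImage);
            }
        }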

    Read the article

  • How To Create a lookup Table with an NSDate for weekly range (over 5 year period)

    - by EarlyMan
    Unsure how to best achieve this. NSDate *date = [NSDate date]; I need to do a lookup on the date and return a string value. 12/17/2011 < date < 12/23/2011 return "20120101" 12/24/2011 < date < 12/30/2012 return "20120102" 12/31/2011 < date < 01/06/2012 return "20120201" ... 10/20/2012 < date < 10/26/2012 return "20122301" ... 11/02/2013 < date < 11/08/2013 return "20132301" .. for 5 years... for each week date can be any date until Dec. 2017. I do not know the logic behind the return strings so I can't simply calculate the string based on the date. The return string (converted to NSDate in the model) is used successfully as my section for my fetchedresultscontroller. I am not sure how to create a lookup table based on an NSDate or if I need some monster if/case statement.

    Read the article

  • Extending NerdDinner: Adding Geolocated Flair

    - by Jon Galloway
    NerdDinner is a website with the audacious goal of “Organizing the world’s nerds and helping them eat in packs.” Because nerds aren’t likely to socialize with others unless a website tells them to do it. Scott Hanselman showed off a lot of the cool features we’ve added to NerdDinner lately during his popular talk at MIX10, Beyond File | New Company: From Cheesy Sample to Social Platform. Did you miss it? Go ahead and watch it, I’ll wait. One of the features we wanted to add was flair. You know about flair, right? It’s a way to let folks who like your site show it off on their own site. For example, here’s my StackOverflow flair: Great! So how could we add some of this flair stuff to NerdDinner? What do we want to show? If we’re going to encourage our users to give up a bit of their beautiful website to show off a bit of ours, we need to think about what they’ll want to show. For instance, my StackOverflow flair is all about me, not StackOverflow. So how will this apply to NerdDinner? Since NerdDinner is all about organizing local dinners, in order for the flair to be useful it needs to make sense for the person viewing the web page. If someone from Egypt visits my blog, they should see information about NerdDinners in Egypt. That’s geolocation – localizing site content based on where the browser’s sitting, and it makes sense for flair as well as entire websites. So we’ll set up a simple little callout that prompts them to host a dinner in their area: Hopefully our flair works and there is a dinner near your viewers, so they’ll see another view which lists upcoming dinners near them: The Geolocation Part Generally website geolocation is done by mapping the requestor’s IP address to a geographic area. It’s not an exact science, but I’ve always found it to be pretty accurate. There are (at least) three ways to handle it: You pay somebody like MaxMind for a database (with regular updates) that sits on your server, and you use their API to do lookups. I used this on a pretty big project a few years ago and it worked well. You use the HTML 5 Geolocation API or Google Gears or some other browser-based solution. I think those are cool (I use Google Gears a lot), but they’re both in flux right now and I don’t think either has a wide enough install base yet to rely on them. You might want to, but I’ve heard you do all kinds of crazy stuff, and sometimes it gets you in trouble. I don’t mean to talk out of line, but we all laugh behind your back a bit. But, hey, it’s up to you. It’s your flair or whatever. There are some free webservices out there that will take an IP address and give you location information. Easy, and works for everyone. That’s what we’re doing. I looked at a few different services and settled on IPInfoDB. It’s free, has a great API, and even returns JSON, which is handy for Javascript use. The IP query is pretty simple.
We hit a URL like this: http://ipinfodb.com/ip_query.php?ip=74.125.45.100&timezone=false … and we get an XML response back like this… <?xml version="1.0" encoding="UTF-8"?> <Response> <Ip>74.125.45.100</Ip> <Status>OK</Status> <CountryCode>US</CountryCode> <CountryName>United States</CountryName> <RegionCode>06</RegionCode> <RegionName>California</RegionName> <City>Mountain View</City> <ZipPostalCode>94043</ZipPostalCode> <Latitude>37.4192</Latitude> <Longitude>-122.057</Longitude> </Response> So we’ll build some data transfer classes to hold the location information, like this: public class LocationInfo { public string Country { get; set; } public string RegionName { get; set; } public string City { get; set; } public string ZipPostalCode { get; set; } public LatLong Position { get; set; } } public class LatLong { public float Lat { get; set; } public float Long { get; set; } } And now hitting the service is pretty simple: public static LocationInfo HostIpToPlaceName(string ip) { string url = "http://ipinfodb.com/ip_query.php?ip={0}&timezone=false"; url = String.Format(url, ip); var result = XDocument.Load(url); var location = (from x in result.Descendants("Response") select new LocationInfo { City = (string)x.Element("City"), RegionName = (string)x.Element("RegionName"), Country = (string)x.Element("CountryName"), ZipPostalCode = (string)x.Element("ZipPostalCode"), Position = new LatLong { Lat = (float)x.Element("Latitude"), Long = (float)x.Element("Longitude") } }).First(); return location; } Getting The User’s IP Okay, but first we need the end user’s IP, and you’d think it would be as simple as reading the value from HttpContext: HttpContext.Current.Request.UserHostAddress But you’d be wrong. Sorry. UserHostAddress just wraps HttpContext.Current.Request.ServerVariables["REMOTE_ADDR"], but that doesn’t get you the IP for users behind a proxy. That’s in another header, “HTTP_X_FORWARDED_FOR". So you can either hit a wrapper and then check a header, or just check two headers. I went for uniformity: string SourceIP = string.IsNullOrEmpty(Request.ServerVariables["HTTP_X_FORWARDED_FOR"]) ? Request.ServerVariables["REMOTE_ADDR"] : Request.ServerVariables["HTTP_X_FORWARDED_FOR"]; We’re almost set to wrap this up, but first let’s talk about our views. Yes, views, because we’ll have two. Selecting the View We wanted to make it easy for people to include the flair in their sites, so we looked around at how other people were doing this. The StackOverflow folks have a pretty good flair system, which allows you to include the flair in your site as either an IFRAME reference or a Javascript include. We’ll do both. We have a ServicesController to handle use of the site information outside of NerdDinner.com, so this fits in pretty well there. We’ll be displaying the same information for both HTML and Javascript flair, so we can use one Flair controller action which will return a different view depending on the requested format. Here’s our general flow for our controller action: Get the user’s IP Translate it to a location Grab the top three upcoming dinners that are near that location Select the view based on the format (defaulted to “html”) Return a FlairViewModel which contains the list of dinners and the location information public ActionResult Flair(string format = "html") { string SourceIP = string.IsNullOrEmpty( Request.ServerVariables["HTTP_X_FORWARDED_FOR"]) ? 
Request.ServerVariables["REMOTE_ADDR"] : Request.ServerVariables["HTTP_X_FORWARDED_FOR"]; var location = GeolocationService.HostIpToPlaceName(SourceIP); var dinners = dinnerRepository. FindByLocation(location.Position.Lat, location.Position.Long). OrderByDescending(p => p.EventDate).Take(3); // Select the view we'll return. // Using a switch because we'll add in JSON and other formats later. string view; switch (format.ToLower()) { case "javascript": view = "JavascriptFlair"; break; default: view = "Flair"; break; } return View( view, new FlairViewModel { Dinners = dinners.ToList(), LocationName = string.IsNullOrEmpty(location.City) ? "you" : String.Format("{0}, {1}", location.City, location.RegionName) } ); } Note: I’m not in love with the logic here, but it seems like overkill to extract the switch statement away when we’ll probably just have two or three views. What do you think? The HTML View The HTML version of the view is pretty simple – the only thing of any real interest here is the use of an extension method to truncate strings that would cause the titles to wrap. public static string Truncate(this string s, int maxLength) { if (string.IsNullOrEmpty(s) || maxLength <= 0) return string.Empty; else if (s.Length > maxLength) return s.Substring(0, maxLength) + "..."; else return s; }   So here’s how the HTML view ends up looking: <%@ Page Title="" Language="C#" Inherits="System.Web.Mvc.ViewPage<FlairViewModel>" %> <%@ Import Namespace="NerdDinner.Helpers" %> <%@ Import Namespace="NerdDinner.Models" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Nerd Dinner</title> <link href="/Content/Flair.css" rel="stylesheet" type="text/css" /> </head> <body> <div id="nd-wrapper"> <h2 id="nd-header">NerdDinner.com</h2> <div id="nd-outer"> <% if (Model.Dinners.Count == 0) { %> <div id="nd-bummer"> Looks like there's no Nerd Dinners near <%:Model.LocationName %> in the near future. Why not <a target="_blank" href="http://www.nerddinner.com/Dinners/Create">host one</a>?</div> <% } else { %> <h3> Dinners Near You</h3> <ul> <% foreach (var item in Model.Dinners) { %> <li> <%: Html.ActionLink(String.Format("{0} with {1} on {2}", item.Title.Truncate(20), item.HostedBy, item.EventDate.ToShortDateString()), "Details", "Dinners", new { id = item.DinnerID }, new { target = "_blank" })%></li> <% } %> </ul> <% } %> <div id="nd-footer"> More dinners and fun at <a target="_blank" href="http://nrddnr.com">http://nrddnr.com</a></div> </div> </div> </body> </html> You’d include this in a page using an IFRAME, like this: <IFRAME height=230 marginHeight=0 src="http://nerddinner.com/services/flair" frameBorder=0 width=160 marginWidth=0 scrolling=no></IFRAME> The Javascript view The Javascript flair is written so you can include it in a webpage with a simple script include, like this: <script type="text/javascript" src="http://nerddinner.com/services/flair?format=javascript"></script> The goal of this view is very similar to the HTML embed view, with a few exceptions: We’re creating a script element and adding it to the head of the document, which will then document.write out the content. Note that you have to consider if your users will actually have a <head> element in their documents, but for website flair use cases I think that’s a safe bet. Since the content is being added to the existing page rather than shown in an IFRAME, all links need to be absolute. 
That means we can’t use Html.ActionLink, since it generates relative routes. We need to escape everything since it’s being written out as strings. We need to set the content type to application/x-javascript. The easiest way to do that is to use the <%@ Page ContentType%> directive. <%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage<NerdDinner.Models.FlairViewModel>" ContentType="application/x-javascript" %> <%@ Import Namespace="NerdDinner.Helpers" %> <%@ Import Namespace="NerdDinner.Models" %> document.write('<script>var link = document.createElement(\"link\");link.href = \"http://nerddinner.com/content/Flair.css\";link.rel = \"stylesheet\";link.type = \"text/css\";var head = document.getElementsByTagName(\"head\")[0];head.appendChild(link);</script>'); document.write('<div id=\"nd-wrapper\"><h2 id=\"nd-header\">NerdDinner.com</h2><div id=\"nd-outer\">'); <% if (Model.Dinners.Count == 0) { %> document.write('<div id=\"nd-bummer\">Looks like there\'s no Nerd Dinners near <%:Model.LocationName %> in the near future. Why not <a target=\"_blank\" href=\"http://www.nerddinner.com/Dinners/Create\">host one</a>?</div>'); <% } else { %> document.write('<h3> Dinners Near You</h3><ul>'); <% foreach (var item in Model.Dinners) { %> document.write('<li><a target=\"_blank\" href=\"http://nrddnr.com/<%: item.DinnerID %>\"><%: item.Title.Truncate(20) %> with <%: item.HostedBy %> on <%: item.EventDate.ToShortDateString() %></a></li>'); <% } %> document.write('</ul>'); <% } %> document.write('<div id=\"nd-footer\"> More dinners and fun at <a target=\"_blank\" href=\"http://nrddnr.com\">http://nrddnr.com</a></div></div></div>'); Getting IP’s for Testing There are a variety of online services that will translate a location to an IP, which were handy for testing these out. I found http://www.itouchmap.com/latlong.html to be most useful, but I’m open to suggestions if you know of something better. Next steps I think the next step here is to minimize load – you know, in case people start actually using this flair. There are two places to think about – the NerdDinner.com servers, and the services we’re using for Geolocation. I usually think about caching as a first attack on server load, but that’s less helpful here since every user will have a different IP. Instead, I’d look at taking advantage of Asynchronous Controller Actions, a cool new feature in ASP.NET MVC 2. Async Actions let you call a potentially long-running webservice without tying up a thread on the server while waiting for the response. There’s some good info on that in the MSDN documentation, and Dino Esposito wrote a great article on Asynchronous ASP.NET Pages in the April 2010 issue of MSDN Magazine. But let’s think of the children, shall we? What about ipinfodb.com? Well, they don’t have specific daily limits, but they do throttle you if you put a lot of traffic on them. From their FAQ: We do not have a specific daily limit but queries that are at a rate faster than 2 per second will be put in "queue". If you stay below 2 queries/second everything will be normal. If you go over the limit, you will still get an answer for all queries but they will be slowed down to about 1 per second. This should not affect most users but for high volume websites, you can either use our IP database on your server or we can whitelist your IP for 5$/month (simply use the donate form and leave a comment with your server IP). 
Good programming practices such as not querying our API for all page views (you can store the data in a cookie or a database) will also help not reaching the limit. So the first step there is to save the geolocalization information in a time-limited cookie, which will allow us to look up the local dinners immediately without having to hit the geolocation service.

    Read the article

  • Looking into ASP.Net MVC 4.0 Mobile Development - part 2

    - by nikolaosk
    In this post I will be continuing my discussion on ASP.Net MVC 4.0 mobile development. You can have a look at my first post on the subject here. Make sure you read it and understand it well before you move on to the rest of this post. I will not be writing any code in this post. I will try to explain a few concepts related to the MVC 4.0 mobile functionality. In this post I will be looking into the Browser Overriding feature in ASP.Net MVC 4.0. By that I mean that we override the user agent for a given user session. This is a very useful feature for people who visit a site on a mobile device and experience the mobile version of the site, but what they really want is the option to switch to the desktop view. "Why might they want to do that?", you might wonder. Well, first of all, the users of our ASP.Net MVC 4.0 application will appreciate that they have the option to switch views, while some others will enjoy the contents of our website more with the "desktop view", since the mobile device they view our site on has quite a large display. Obviously this is still only one site; these are just different views that are rendered. To put it simply, browser overriding lets our application treat requests as if they were coming from a different browser rather than the one they are actually from. In order to do that programmatically we must have a look at the System.Web.WebPages namespace and the classes in it. More specifically, the class BrowserHelpers. Have a look at the picture below. In this class we see some extension methods for the HttpContext class. These are helper extension methods that we use to switch from one browser to another, thus overriding the current/actual browser. These APIs affect layouts, views and partial views, and will not affect any other ASP.Net Request.Browser-related functionality. The overridden browser is stored in a cookie. Let me explain what some of these methods do. SetOverriddenBrowser() - lets us set the user agent string to a specific value. GetOverriddenBrowser() - lets us get the overridden value. ClearOverriddenBrowser() - lets us remove any overridden user agent for the current request. To recap, when our ASP.Net MVC 4.0 application is viewed on a mobile device, we can have a link like "Desktop View" for all those who desperately want to see the site in the full desktop-browser version. We can then specify a browser type override. My controller class (snippet of code) that is responsible for handling the switching could be something like this:

        public class SwitchViewController : Controller
        {
            public RedirectResult SwitchView(bool mobile, string returnUrl)
            {
                if (Request.Browser.IsMobileDevice == mobile)
                    HttpContext.ClearOverriddenBrowser();
                else
                    HttpContext.SetOverriddenBrowser(mobile ? BrowserOverride.Mobile : BrowserOverride.Desktop);
                return Redirect(returnUrl);
            }
        }

    Hope it helps!!!!
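    As a hedged sketch of how the switch link itself might look (a Razor layout snippet; the controller and action names are taken from the SwitchViewController above, everything else is assumed):

        @* _Layout.Mobile.cshtml - offer a way back to the full site *@
        @if (Request.Browser.IsMobileDevice)
        {
            @Html.ActionLink("Full desktop view", "SwitchView", "SwitchView",
                new { mobile = false, returnUrl = Request.Url.PathAndQuery }, null)
        }
        else
        {
            @Html.ActionLink("Mobile view", "SwitchView", "SwitchView",
                new { mobile = true, returnUrl = Request.Url.PathAndQuery }, null)
        }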

    Read the article

  • Pretty URL in ADF Faces of JDeveloper 11.1.2.2

    - by Frank Nimphius
    Many features planned for Oracle JDeveloper 12c find their way into current releases of Oracle JDeveloper 11g R1 and JDeveloper 11g R2. One example of such a feature is "pretty URL" - or "clean URL" as the Oracle JDeveloper 11g R2 (11.1.2.2) documentation puts it. "A.2.3.24 Clean URLs Historically, ADF Faces has used URL parameters to hold information, such as window IDs and state. However, URL parameters can prevent search engines from recognizing when URLs are actually the same, and therefore interfere with analytics. URL parameters can also interfere with bookmarking. By default, ADF Faces removes URL parameters using the HTML5 History Management API. If that API is unavailable, then session cookies are used. You can also manually configure how URL parameters are removed using the context parameter oracle.adf.view.rich.prettyURL.OPTIONS. Set the parameter to off so that no parameters are removed. Set the parameter to useHistoryApi to only use the HTML5 History Management API. If a browser does not support this API, then no parameters will be removed. Set the parameter to useCookies to use session cookies to remove parameters. If the browser does not support cookies, then no parameters will be removed." See: http://docs.oracle.com/cd/E26098_01/web.1112/e16181/ap_config.htm#ADFUI12856 So basically, what this part of the documentation says is: in JDeveloper 11g R2 (11.1.2.2), Oracle ADF Faces automatically removes its internally used dynamic parameters from the URL. You can influence the setting with the prettyURL.OPTIONS context option, which, however, is not recommended, because the default behavior is able to detect whether the browser client supports HTML 5 History management or not. In the latter case it then uses a session cookie, and if this doesn't work, falls back to the "old" approach of adding URL parameters. The information that is not so explicitly and clearly mentioned in the documentation is that this is only for ADF Faces parameters (such as _afrLoop, Adf-Window-Id, etc.), but not the ADF controller token (_adf.ctrl-state)! Removing the ADF controller token is an enhancement request that will be implemented in Oracle JDeveloper 12c.
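    For reference, a context parameter like this is set in the application's web.xml; a minimal sketch (only needed if you want to override the default, which is exactly what is not recommended above):

        <!-- web.xml: override the default pretty-URL behaviour -->
        <context-param>
          <param-name>oracle.adf.view.rich.prettyURL.OPTIONS</param-name>
          <!-- off | useHistoryApi | useCookies -->
          <param-value>off</param-value>
        </context-param>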

    Read the article

  • Why do I always think I know much less than others? [closed]

    - by John Kenedy
    I have been programming since primary 6. Since the days of DOS I have been programming, in QuickBASIC 4.5, then VB 6, then C#. In between I have also programmed in C++. But every time I open Stack Overflow and try to help others with their problems, it seems that I know nothing. I feel so stupid even though I have been programming for so long. I am shocked reading all the questions and unable to find any clue. Is technology moving so fast that it has left me out? I feel that technology changes too fast and I can't keep up: when I know ASP.NET Web Forms, MVC is out; when I know MVC, Android/iPhone/HTML5 apps are popular. It seems that I am chasing something and never reach 'it'. I don't know whether this is the correct place for me to talk about this. I just wish to hear opinions from people like you: how do you think technology should grow, instead of recreating languages and adding bugs here and there for programmers to figure out, while the big companies share the solutions among themselves? This is exactly how I feel. A simple example: why doesn't Dictionary<> in .NET provide a way to iterate over the collection by index? Why must we use Key or GetEnumerator()? A developer has to google and waste hour after hour to find pieces of hack code that use reflection to achieve reading by index, and will then keep it as valuable collection code. However, when the time comes, everything changes again and the developer has to find answers for new silly problems again! Yes, I really hate it! I hate how many big companies are playing with developers by cutting a big picture into a small puzzle, messing it up and asking developers to put it together themselves. As if they are creating problems for us to solve so we are unable to grow; we are being manipulated by the silly problems they have created. Another example would be how difficult it is to collect cookies from a CookieContainer without passing the URL - yes, without the URL: I WANT to get all the cookies in the CookieContainer without knowing the URL, I want to iterate over them all. Why does Microsoft have to limit me from doing that?
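    For what it's worth, a small sketch of the Dictionary<,> point (made-up data): LINQ's ElementAt gives you "index" access, though it walks the enumerator up to that position and dictionary order is not guaranteed, which is presumably why there is no real indexer.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Program
        {
            static void Main()
            {
                var ages = new Dictionary<string, int> { { "Ann", 30 }, { "Bob", 25 } };

                // "Index" access via LINQ: O(n), and the order is an implementation detail.
                KeyValuePair<string, int> second = ages.ElementAt(1);
                Console.WriteLine("{0} is {1}", second.Key, second.Value);
            }
        }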

    Read the article

  • Access denied 403 errors after migrating my site

    - by AgA
    I've recently migrated my Joomla site from one shared host to another with Hostgator. GWT notified me about many 403 access denied pages. I've checked with Firebug too, and even though the browser displays the full page correctly, the HTTP response is 403. I've checked the home page and it's correctly returning a 200 response. The same is shown by Fetch as Google in GWT (pasted at the bottom). The site is 3 years old and I regularly do such migrations. I've copied the files and database "AS IS". I've even cleared all the caches, but no luck. There is only one change: previously the site was the primary domain but now it's an add-on one. What could be the issue? This is how Googlebot fetched the page. Fetch as Google URL: http://MYSITE.COM/-----------------REMOVED.html Date: Thursday, June 20, 2013 at 10:32:14 PM PDT Googlebot Type: Web Download Time (in milliseconds): 3899 HTTP/1.1 403 Forbidden Date: Fri, 21 Jun 2013 05:32:15 GMT Server: Apache P3P: CP="NOI ADM DEV PSAi COM NAV OUR OTRo STP IND DEM" Expires: Mon, 1 Jan 2001 00:00:00 GMT Cache-Control: post-check=0, pre-check=0 Pragma: no-cache Set-Cookie: 0e4f6b53991c80cf39d57a6db58bb58d=ee2d880e8db0f1fc03c5612ea5a77004; path=/ Last-Modified: Fri, 21 Jun 2013 05:32:19 GMT Keep-Alive: timeout=5, max=75 Connection: Keep-Alive Transfer-Encoding: chunked Content-Type: text/html; charset=utf-8 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en-gb" lang="en-gb" > <head> <base href="http://www.mysite.com/-----------------rajiv-yuva-shakthi-programme-finance-planning.html" /> <meta http-equiv="content-type" content="text/html; charset=utf-8" /> <meta name="robots" content="index, follow" /> <meta name="keywords" content="" /> <<<<<<TRIMMED>>>>>>>>>>>>>>
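    One quick way to narrow this down (a sketch; the URL is a placeholder): request the same page from a shell with and without Googlebot's user agent. If only the bot user agent gets the 403, the new host's mod_security rules or a user-agent condition in .htaccess are more likely culprits than Joomla itself.

        # Compare the status lines of the two responses
        curl -I http://MYSITE.COM/some-page.html
        curl -I -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" http://MYSITE.COM/some-page.html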

    Read the article

  • Plans for Java 7 and E-Business Suite Certification

    - by Steven Chan (Oracle Development)
    As of June 2012, Java 7 has not been certified yet with Oracle E-Business Suite.  EBS customers should continue to run JRE 6 on their Windows end-user desktops, and JDK 6 on their EBS servers. If a search engine has brought you to this article, please check the Certifications summary for our latest certified Java release. Our plans for certifying Java 7 for the E-Business Suite We plan on releasing the Java 7 certification for E-Business Suite customers in two phases: Phase 1: Certify JRE 7 for Windows end-user desktops Phase 2: Certify JDK 7 for server-based components When will Java 7 be certified with EBS? We're working on the first phase now. As usual, I cannot discuss release dates here, but you can monitor or subscribe to this blog for updates. Current known issues with JRE 7 in EBS environments Our current testing shows that there are known incompatibilities between JRE 7 and the Forms-invocation process in EBS environments.  We have been working directly with the Java division on this for a while now.  In the meantime, EBS customers should not deploy JRE 7 to their end-user Windows desktop clients. You should stick with JRE 1.6 for now.  But wait, you previously said... Older JRE certification announcements stated: Our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops from JRE 1.6.0_03 and higher.  We test all new JRE releases in parallel with the JRE development process, so all JRE releases are considered certified with the E-Business Suite on the same day that they're released by our Java team.  You do not need to wait for a certification announcement before applying new JRE releases to your EBS users' desktops. Yes, this is true.  This standard boilerplate text was written before JRE 7 was released, so there was no possibility of misunderstanding.  With the availability of JRE 7, that boilerplate needs to be revised to read: Our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops from JRE 1.6.0_03 and later updates on the 1.6 codeline.  We test all new JRE 1.6 releases in parallel with the JRE development process, so all new JRE 1.6 releases are considered certified with the E-Business Suite on the same day that they're released by our Java team.  You do not need to wait for a certification announcement before applying new JRE 1.6 releases to your EBS users' desktops. References Recommended Browsers for Oracle Applications 11i (Metalink Note 285218.1) Upgrading Sun JRE (Native Plug-in) with Oracle Applications 11i for Windows Clients (Metalink Note 290807.1) Recommended Browsers for Oracle Applications 12 (MetaLink Note 389422.1) Upgrading JRE Plugin with Oracle Applications R12 (MetaLink Note 393931.1) Related Articles Mismanaged Session Cookie Issue Fixed for EBS in JRE 1.6.0_23 Roundup: Oracle JInitiator 1.3 Desupported for EBS Customers in July 2009

    Read the article

  • List of common pages to have in the footer [closed]

    - by user359650
    I would like to post this question as a reference for webmasters wondering what pages they should include in the footer. I will use answers to complete my initial list: About us / About MyCompany / MyCompany About / About us: description about the company, its mission, and its vision. History: summary of milestones achieved by the company. The team / Management / Board of directors: depending on size of the company there may be one of more pages describing the people involved in the company, depending on their position. Awards: list of awards received by the company if any. In the press / They're talking about us: list of links to external websites, usually highly regarded news websites, which mentioned the company in one of their articles. Media Wallpapers: wallpapers with company logo in different colors and formats that fans can set as desktop image for their computer. logos: company logo in different colors and formats that websites/blogs posting about the website can use for illustration purposes. Media kits: documents, usually in PDF format summarizing the key company figures and facts that journalists can download and read to get a quick overview of the company. Misc Contact / Contact us: contact details the company is prepared to disclose if any (address, email, phone) or contact form. Careers / Jobs / Join us: list of open vacancies with contact form to apply. Investors / Partners / Publishers: information and contact forms for companies willing to become Investors/Partners/Publishers or login page to access portal restricted to those who already are. FAQ: list of common questions and answers to guide users and reduce number of support requests. Follow us / Community Facebook / Twitter / Google+: links to the company's pages/accounts on various social networks. Legal Terms / Terms of use / Terms & Conditions: rules users must follow when browsing the website. Privacy / Privacy Statement: explanations as to how the company deals with users' personal data and what users can do about it (request information to be deleted...). cookies: page that starts appearing on more and more websites due to new regulation (notably EU) imposing more transparency and control for users about cookies (e.g. BBC cookie page). Any input is welcome PS: if someone with enough rep could add the footer tag that would be great (min. 300 required).

    Read the article

  • ExplorerCanvas and JQuery

    - by PhubarBaz
    I am working on a Javascript app (CloudGraph) that uses HTML5 canvas and JQuery. I'm using ExplorerCanvas to support canvas in IE. I recently came across an interesting problem. What I was trying to do is restore the user's settings when the page is loaded. I read some information from a cookie that I set the last time the user accessed the application. One of these settings is the size of the canvas. I decided that the best place to do this would be when the document is ready using JQuery $(document).ready(). This worked fine in browsers that natively support the canvas element. But in IE it kept getting errors the first time I would hit the page. It seemed that the excanvas element wasn't initialized yet because I was getting null reference and unknown properties errors. If I refreshed the page the errors went away but the resized canvas wasn't drawing on the entire area of the canvas. It was like the clipping rectangle was still set to the default canvas size. I found that the canvas element when using excanvas has a div child element which is where the actual drawing takes place. When I changed the width and height of the canvas element in document.ready it didn't change the width and height of the child div. Initially my solution was to also change the div element when changing the canvas element and that worked. But then I realized that having to refresh the page every time I started the app in IE really sucked. That wouldn't be acceptable for users. Since it seemed like the canvas wasn't getting initialized before I was trying to use it I decided to try to initialize my app at a different time. I figured the next best place was in the onload event. Sure enough, moving my initialization to onload fixed all of the problems. So, it looks like the canvas shouldn't be manipulated until the onload event when using ExplorerCanvas. There might be ways to do it when the document is ready. I found some posts on initializing excanvas manually, but for me waiting until onload worked just fine.
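    A stripped-down sketch of the difference described above (the canvas id, sizes and the cookie-restore call are invented):

        // Too early for excanvas in IE: at DOM-ready the emulated canvas element
        // may not have been initialised yet, so calls against it blow up.
        $(document).ready(function () {
            // restoreCanvasSizeFromCookie($('#graph')[0]);  // failed on the first hit in IE
        });

        // By window.onload excanvas has finished wrapping the element, so this is safe
        // in both IE and browsers with native canvas support.
        $(window).load(function () {
            var canvas = document.getElementById('graph');
            canvas.width = 800;    // values the real app reads from the settings cookie
            canvas.height = 400;
            var ctx = canvas.getContext('2d');
            ctx.strokeRect(10, 10, 100, 50);
        });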

    Read the article

  • How to temporarily save the result of the query, to use in another?

    - by Truth
    I have this problem I think you may help me with. P.S. I'm not sure how to call this, so if anyone finds a more appropriate title, please do edit. Background I'm making this application for searching bus transit lines. Bus lines are a 3 digit number, and is unique and will never change. The requirement is to be able to search for lines from stop A to stop B. The user interface is already successful in hinting the user to only use valid stop names. The requirement is to be able to display if a route has a direct line, and if not, display a 2-line and even 3-line combination. Example: I need to get from point A to point D. The program should show: If there's a direct line A-D. If not, display alternative, 2 line combos, such as A-C, C-D. If there aren't any 2-line combos, search for 3-line combos: A-B, B-C, C-D. Of course, the app should display bus line numbers, as well as when to switch buses. What I have: My database is structured as follows (simplified, actual database includes locations and times and whatnot): +-----------+ | bus_stops | +----+------+ | id | name | +----+------+ +-------------------------------+ | lines_stops_relationship | +-------------+---------+-------+ | bus_line | stop_id | order | +-------------+---------+-------+ Where lines_stops_relationship describe a many-to-many relationship between the bus lines and the stops. Order, signifies the order in which stops appear in a single line. Not all lines go back and forth, and order has meaning (point A with order 2 comes after point B with order 1). The Problem We find out if a line can pass through the route easily enough. Just search for a single line which passes through both points in the correct order. How can I find if there's a 2/3 line combo? I was thinking to search for a line which matches the source stop, and one for the destination stop, and see if I can get a common stop between them, where the user can switch buses. How do I remember that stop? 3 line combo is even trickier, I find a line for the source, and a line for the destination, and then what? Search for a line which has 2 stops I guess, but again, How do I remember the stops? tl;dr How do I remember results from a query to be able to use it again? I'm hoping to achieve this in a single query (for each, a query for 1-line routes, a query for 2, and a query for 3-line combos). Note: I don't mind if someone suggests a completely different approach than what I have, I'm open to any solutions. Will award any assistance with a cookie and an upvote. Thanks in advance!
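    As a sketch of the two-line case in a single query (table and column names are from the schema above; :src and :dst stand for the two stop ids, and "order" may need quoting since it is a reserved word): the transfer stop does not need to be "remembered" anywhere, it is simply a column of the joined row that both halves of the self-join agree on.

        SELECT l1.bus_line AS first_line,
               t1.stop_id  AS transfer_stop,
               l2.bus_line AS second_line
        FROM   lines_stops_relationship l1            -- first line passes the source stop...
        JOIN   lines_stops_relationship t1            -- ...and some later stop (the transfer)
                 ON t1.bus_line = l1.bus_line AND t1."order" > l1."order"
        JOIN   lines_stops_relationship t2            -- second line passes the same transfer stop...
                 ON t2.stop_id = t1.stop_id AND t2.bus_line <> t1.bus_line
        JOIN   lines_stops_relationship l2            -- ...and later the destination stop
                 ON l2.bus_line = t2.bus_line AND l2."order" > t2."order"
        WHERE  l1.stop_id = :src
          AND  l2.stop_id = :dst;

    The three-line case is the same idea with one more transfer join; direct lines are the degenerate version, a single self-join from :src to :dst on the same bus_line.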

    Read the article

  • How do you backup 40+ Centos5.5 servers?

    - by John Little
    We are embarrassed to ask this question. Apologies for our lack of UNIX expertise. We have inherited 40+ CentOS 5.5 servers, and don't know how to back them up. We need low-level clone-type images so that we could restore the servers from scratch if we had to replace the HDs etc. We have used the "dd" command, but we assume this only works if you want to back up one local disk to another, not 40 servers to one server with an external USB HD attached. All 40 servers have a pair of mirrored disks (we don't know if it's HW or SW RAID). Most only have 100MB used. Servers are running Apache, Zend, Tomcat, MySQL etc. Ideally we don't want to have to shut them down to back up (but could). We assume that standard UNIX commands like tar, cpio, rsync, scp etc. are of no use as they only copy files, not partitions, all attributes, groups etc., i.e. they do not produce a result which can simply be re-imaged to a new HD to get the server back from the dead. We have a large SAN, a spare Windows box and spare UNIX boxes, but these are only visible to one layer in the network. We have an unused Dell DL2000 monster tape unit, but no software or documentation for it. We have a copy of Symantec Backup Exec, but we have no budget for UNIX client licenses. (The company has negative amounts of money.) We need to be able to initiate the backup remotely, as we can only access the servers in person in an emergency (i.e. to restore). Googling returns some applications to do this, e.g. Clonezilla, which looks difficult to install and invasive, and Mondo, which only seems to support backup if you are local to the machine. Amanda might be an option, but looks like days/weeks of work to learn and set up. Is there anything built into CentOS, or do we have to go the route of installing, learning and configuring a set of backup packages? Any ideas? This must be a pretty standard problem for which googling doesn't give an obvious answer.
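    A sketch of the lowest-tech remotely-initiated option (host and device names are placeholders): pipe dd over ssh from the backup box, and keep the partition table and package list next to the image. This is only really safe for quiet or stopped services; for busy MySQL boxes a database dump plus a file-level rsync is usually a better complement.

        # Run from the backup server; server01 and /dev/sda are placeholders.
        ssh root@server01 "dd if=/dev/sda bs=1M | gzip -c" > server01-sda.img.gz

        # Keep the partition layout and installed package list alongside the image:
        ssh root@server01 "sfdisk -d /dev/sda" > server01-sda.sfdisk
        ssh root@server01 "rpm -qa" > server01-packages.txt

        # Restore later from a rescue environment by reversing the pipe:
        #   gunzip -c server01-sda.img.gz | ssh root@server01 "dd of=/dev/sda bs=1M"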

    Read the article

  • Exchange-Server Query

    - by Rudi Kershaw
    First, a little background. I've recently been taken on as a web and software developer for a small company which has no other in-house IT support. They've been asking my opinion on lots of IT subjects that are quite far out of my comfort zone. I'm definitely not a network admin. Their IT consultancy contractor is pushing them to upgrade their dedicated Exchange server, even though it seems like the one they currently have has a lot of life left in it and is running problem-free. They say it's "coming to the natural end of its life". They want to install a monster with a Xeon E5-2420, 32GB RAM, 2x 1TB HDDs, Windows Server 2012 and Microsoft Exchange 2010. They want to charge a small fortune for it. Basically, this system seems massively over the top seeing as it won't be doing anything other than running as an Exchange server for a company with fewer than 25 email accounts. My employers also have a file server system in-house that hosts three web apps, an SQL server, their local domain, print server and shared folders. That machine is using the same specs as the proposed new one, and it is barely using any of its potential. I asked if Microsoft Exchange 2010 could be installed on their file server, but they said that MS Exchange can't run on the same system as an SQL server because for some reason they will eat up each other's resources (even though the SQL server isn't touching 1% of the current system's CPU or RAM). My question is really, are they trying to rip my employers off? Could MS Exchange be installed on their other server (on a virtual instance or not), or does the old one even need replacing at all? Going with their current suggestion will cost the company in excess of £6k, and it seems entirely unnecessary. I apologise, because I know this is probably a little thin on details, but if I carry on I could end up writing a massive essay that no-one will want to read. I've been doing my research, but I'm not knowledgeable enough to make any hard decisions. Let me know if you need any more details. Thank you for any help you can offer. Further details: the new Exchange server would need to support Outlook Web App, 25 users, a few public mailboxes, and email exchange with BlackBerrys.

    Read the article

  • Why is firefox 4 not HW accelerated?

    - by acidzombie24
    At first I thought it was my computer, but then I tried Chrome. Why isn't Firefox hardware accelerated? The first screenshot shows Chrome at 23% usage; the second shows 59%. I have 2 CPUs, which is why it isn't at 100% usage. The game pictured is Biolab. Here's the text from about:support: Application Basics Name Firefox Version 4.0 User Agent Mozilla/5.0 (Windows NT 6.1; WOW64; rv:2.0) Gecko/20100101 Firefox/4.0 Profile Directory Open Containing Folder Enabled Plugins about:plugins Build Configuration about:buildconfig Extensions Name Version Enabled ID Modified Preferences Name Value accessibility.typeaheadfind.flashBar 0 browser.places.importBookmarksHTML false browser.places.smartBookmarksVersion 2 browser.startup.homepage_override.buildID 20110303194838 browser.startup.homepage_override.mstone rv:2.0 extensions.lastAppVersion 4.0 gfx.font_rendering.directwrite.enabled true network.cookie.prefsMigrated true places.history.expiration.transient_current_max_pages 127602 privacy.sanitize.migrateFx3Prefs true Graphics Adapter Description Mobile Intel(R) 4 Series Express Chipset Family Vendor ID 8086 Device ID 2a42 Adapter RAM Unknown Adapter Drivers igdumd64 igd10umd64 igdumdx32 igd10umd32 Driver Version 8.15.10.2202 Driver Date 8-25-2010 Direct2D Enabled true DirectWrite Enabled true (6.1.7600.16385, font cache n/a) WebGL Renderer Google Inc. -- ANGLE -- OpenGL ES 2.0 (ANGLE 0.0.0.541) GPU Accelerated Windows 1/1 Direct3D 10

    Read the article

  • IIS 7 returns 304 instead of 200

    - by Ola Herrdahl
    I have a strange issue with IIS 7. Sometimes it seems to return a 304 instead of a 200. Here is a sample request captured with Fiddler: (Note that the file requested is not located in my browsers cache yet.) GET https://[mysite]/Content/js/jquery.form.js HTTP/1.1 Accept: */* Referer: https://[mysite]/Welcome/News Accept-Language: sv-SE User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.2; OfficeLiveConnector.1.4; OfficeLivePatch.1.3; .NET4.0C; .NET4.0E) Accept-Encoding: gzip, deflate Host: [mysite] Connection: Keep-Alive Cache-Control: no-cache Cookie: ... Note that there is no If-Modified-Since or If-None-Match in the request. But still the response is: HTTP/1.1 304 Not Modified Cache-Control: public Expires: Tue, 02 Mar 2010 06:26:08 GMT Last-Modified: Mon, 22 Feb 2010 21:58:44 GMT ETag: "1CAB40A337D4200" Server: Microsoft-IIS/7.5 X-Powered-By: ASP.NET Date: Mon, 01 Mar 2010 17:06:34 GMT Does anyone have a clue of what could be wrong here? I'm running IIS 7 on Windows Web Server 2008 R2.

    Read the article

  • Zscaler. Certs, cookies, and port 80 traffic

    - by 54's_lol
    So I work at HQ for a large company that shall remain nameless. We use Zscaler and I had to roll out a 2048 cert per zscaler's request. People around me at work dont understand the technology and think that the cert's are what is allowing internet connectivity. From my understanding(and please chime in) is the cookie located C:\Users\$$$$$$4$$\AppData\Roaming\Macromedia\Flash Player#SharedObjects\Q3JQJQJV\gateway.zscaler.net\zscaler.swf here that gets created when you provide your creds the first time you use the browser. The cert's are just simply a way of inspecting the SSL traffic as zscaler had no way of doing this before without them. They are essentially using the classic MITM attack to parse your SSL traffic. Gmail is smart enough to recognize this as you get a warning. My question is this, is there a product or service that I can use to verify my web browser when at home(I.E. off company network) isn't still getting routed to zscaler's cloud? If i do a tracert that will work fine. It's the port 80 and 443 web traffic zscaler and my company is after. I would like to verify that when I'm off their premise that my web traffic is using only my isp and the path to whatever content I'm searching for. Do the cert's i'm pushing and browser authentication do something behind the curtain that forces web traffic to get routed to zscaler? I searched quite a bit and would very much like to know if I'm ever off company scrutiny. I do know zscaler offers the service to force the scenario im asking about. Can I prove how my web traffic is getting routed? Thanks for any insight. I've been a fan for a long time and your guy's kung fu is very strong:-)

    Read the article

  • Time machine disk icon on boot disk

    - by Ben Lings
    The icon for Macintosh HD (my boot disk) shows as a Time Machine disk. There is a file .com.apple.timemachine.supported in the root of the disk. If I delete the file and restart the computer, the icon goes back to a normal HD icon. However, the .com.apple.timemachine.supported file is recreated at some point during boot, because when I log in again the file is back. If I then reboot again, the icon goes back to being a Time Machine one. Any ideas about what is creating this file and why? More importantly - how can I get it to stop? It looks like something thinks the boot disk should be a Time Machine volume, but what? Console.app shows the following messages at approximately hourly intervals: 19/01/2010 19:23:54 /System/Library/CoreServices/backupd[7459] Starting standard backup 19/01/2010 19:23:54 /System/Library/CoreServices/backupd[7459] Cookie file is not readable or does not exist at path: /.<12 hex digits of MAC address for en0> 19/01/2010 19:23:54 /System/Library/CoreServices/backupd[7459] Volume at path / does not appear to be the correct backup volume for this computer. (Cookies do not match) 19/01/2010 19:23:59 /System/Library/CoreServices/backupd[7459] Backup failed with error: 18 Other possibly relevant information: the boot HD isn't the original - the original failed, so this is a SuperDuper'd clone of the original drive. I used to use the same disk for a SuperDuper clone as for Time Machine. These are the same symptoms as this and this.

    Read the article

  • gzip specific files

    - by byTheDrop
    for some reason these files are not gzipping on my apache server, chrome network tab shows this. Is there a specific directive I can add to htaccess to cache these files? Compressing the following resources with gzip could reduce their transfer size by about two thirds (~680.45KB): adae8bc4c3cb52cbe22358aaced87a72.css could save ~607B css_f91fa8d73b5e7661d6dcf9e58395e533.css could save ~59.54KB jquery.min.js could save ~37.27KB drupal.js could save ~6.15KB auto_image_handling.js could save ~6.72KB lightbox.js could save ~29.38KB superfish.js could save ~2.42KB jquery.bgiframe.min.js could save ~1011B jquery.hoverIntent.minified.js could save ~1.05KB nice_menus.js could save ~581B panels.js could save ~531B jquery.pngFix.js could save ~2.98KB jquery.cycle.all.min.js could save ~20.20KB views_slideshow.js could save ~8.76KB views_slideshow.js could save ~9.02KB wanderlust_custom_videos.js could save ~598B wl_helper.js could save ~777B extlink.js could save ~2.88KB cufon-yui.js could save ~11.89KB googleanalytics.js could save ~1.48KB swfobject.js could save ~6.65KB jquery.jcarousel.min.js could save ~10.19KB jcarousel.js could save ~6.01KB Akzidenz_Grotesk_BE_Super_800.font.js could save ~14.27KB Akzidenz_Grotesk_BE_Bold_700.font.js could save ~12.96KB Akzidenz_Grotesk_BE_Cn_400.font.js could save ~13.39KB SuperCondensed_500.font.js could save ~24.40KB FuturaBold_700.font.js could save ~26.19KB Futura_500.font.js could save ~57.70KB SuperGroteskB_500.font.js could save ~23.86KB jquery.cookie.js could save ~1.25KB wanderlust.js could save ~1.69KB sliderbottom.js could save ~442B jcarousellite_1.0.1.min.js could save ~4.60KB jcarousellite_control.js could save ~224B sitesdropdown.js could save ~1.09KB widgets.js could save ~50.13KB cufon-drupal.js could save ~599B swfobject_api.js could save ~348B ga.js could save ~24.02KB all.js could save ~124.67KB tweet_button.1347008535.html could save ~38.43KB xd_arbiter.php could save ~16.80KB xd_arbiter.php could save ~16.80KB
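    If the goal is compression (mod_deflate) rather than caching, the usual .htaccess approach is to compress by MIME type instead of listing individual files; a sketch, assuming mod_deflate is available and .htaccess overrides are allowed on the host:

        # .htaccess
        <IfModule mod_deflate.c>
            AddOutputFilterByType DEFLATE text/html text/plain text/css
            AddOutputFilterByType DEFLATE application/javascript application/x-javascript text/javascript
        </IfModule>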

    Read the article

  • OpenAM throwing 302 0 behind haproxy, nginx

    - by Travis
    I'm having some issues with my deployment and was wondering if you can help. My set-up is as follows: 2 OpenAM servers are set up behind a load balancer (HAProxy). The load balancer is set up behind two reverse proxies (nginx). The two reverse proxies are set up behind another load balancer (HAProxy). So a request will go through HAProxy -> nginx -> HAProxy -> OpenAM. I can access the OpenAM web console through the reverse proxies without a problem. Everything works fine at this level. However, when I access OpenAM through the load balancer in front of the reverse proxies, OpenAM throws a 302 error. The funny thing is, however, that I can access host/openam/UI/Login and log in successfully. I even get the cookie and have access to my apps that are set up. However, immediately after the login OpenAM throws a 302 redirect. I'm puzzled and cannot figure out what is going wrong. Does anyone have any idea? My config files are below: nginx config : server { listen 443; server_name oamlb1; location / { proxy_pass http://oamlb1.mydomain.com:8080; proxy_set_header X-Real-IP $remote_addr; } location /openam { proxy_pass http://oamlb1.mydomain.com:8080; proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host oamlb1.mydomain.com:8080; } } haproxy config : (This file is for the servers. The file for the reverse proxies is identical except it points to the reverse proxies) listen http_proxy :8090 mode http balance roundrobin option httpclose option forwardfor server webA oamserver1.mydomain.com:18080 option forwardfor Thanks

    Read the article

  • WebSeal and jsp content updated by Ajax

    - by lior chaga
    Hey, I have a problem running an application in an environment with WebSeal. It is a web application with a Java server that contains many parts that are replaced within the page according to user input. For instance, a form called Outer.jsp may contain a form:options combo-box (via spring-forms) where, upon selection of an option, a certain div is updated with content produced by a JSP and fetched by an Ajax call (the Ajax implementation on the client is done with the Prototype JavaScript framework 1.5.1.2). Let's call the content fetched by Ajax Inner.jsp. So Outer.jsp is fetching Inner.jsp, which in turn uses JS functions in files included by Outer.jsp. This, I think, is where my problem starts - Inner.jsp is not familiar with any of the functions included in Outer.jsp. And so, almost any operation performed by Inner.jsp is failing miserably. Needless to say, this works perfectly when running in an environment without WebSeal. Note that scripting is enabled on the WebSeal junction (with the -J option). I also see that the content returned by the Ajax call includes a document.cookie added by WebSeal (not sure whether it matters to this problem). Can anyone assist? Thanks! Lior

    Read the article

  • How to automatically remove Flash history/privacy trail? Or stop Flash from storing it?

    - by Arjan van Bentem
    Many people have heard about third-party cookies, and some browsers even block those by default. Some people may even be using Private Browsing modes. However, only few seem to realise that Adobe's Flash player also leaves a cross-browser trail on your local hard drive, and allows for sending cookie-like information back to the server, including third-party sites. And because it is a plugin, Flash does not take any of the browser's privacy settings into account. Sorry for the long post, but first some details about why using Flash raises a privacy concern, followed by the results of my tests: The Flash player keeps a cross-browser history of the domain names of the Flash-sites your computer has visited. Unlike your browser's history, this history is not limited to a certain number of days. History is also recorded while using so-called Private Browsing modes. It is stored on your hard drive (though, as described below, without going to Adobe's site you won't know what is stored). I am not sure if any date and time information is kept about each visit, but to see the domain names: right-click on some Flash content, open the settings dialog, and click the Help icon or click the Advanced button within the Privacy tab. This opens a browser to the help pages on Adobe.com, where one can click through to the Website Storage Settings panel. One can clear the existing list, but one cannot stop it from being recorded again. Flash allows for storing data on your local hard drive, using so-called Local Shared Objects (aka "Flash Cookies"). Just like HTTP cookies, this data can be sent back to the server, for tracking purposes. They are cross-browser, have no expiration date, and no user defined maximum lifetime can be set in the Flash preferences either. These not being HTTP cookies, they are (of course) not blocked by a browser's cookies preferences and are not removed when the normal HTTP cookies are deleted. Adobe has announced that version 10.1 will obey Private Browsing in most popular browsers, but unfortunately no word about also removing the data whenever normal cookies are deleted manually. And its implementation might be confusing: [..] if the browser is in normal browsing mode when the Flash Player instance is created, then that particular instance will forever be in normal browsing mode (private browsing is turned off). Accordingly, toggling private browsing on or off without refreshing the page or closing the private browsing window will not impact Flash Player. Local Shared Objects are not limited to the site you visit, and third-party storage is enabled by default. At the Global Storage Settings panel one can deselect the default Allow third-party Flash content to store data on your computer. Because of the cross-browser and expiration-less nature (and the fact that few people know about it), I feel that the cross-browser third-party Flash Cookies are more dangerous for visitor tracking than third-party normal HTTP cookies. They are even used to restore plain HTTP cookies that the user tried to delete: "All advertisers, websites and networks use cookies for targeted advertising, but cookies are under attack. According to current research they are being erased by 40% of users creating serious problems," says Mookie Tenembaum, founder of United Virtualities. "From simple frequency capping to the more sophisticated behavioral targeting, cookies are an essential part of any online ad campaign. 
    When selecting None (zero KB) for "Specify the amount of disk space that websites that you haven't yet visited can use to store information on your computer", and checking "Never ask again", some sites do not work. However, the same site might work when setting it to None but without selecting "Never ask again", and then choosing Deny whenever prompted. Both options would result in zero KB of data being allowed, but the behaviour differs.

    The plugin also provides a Flash Player cache for Adobe-signed files. I guess these files are not an issue.

    So: how to automatically delete that information?

    On a Mac, one can find a settings.sol file and a folder for each visited Flash website in:

        $HOME/Library/Preferences/Macromedia/Flash Player/macromedia.com/support/flashplayer/sys/

    Deleting the settings.sol file and all the folders in sys removes the trail from the settings panels. However, the actual Local Shared Objects are elsewhere (see Wikipedia for locations on other operating systems), in a randomly named subfolder of:

        $HOME/Library/Preferences/Macromedia/Flash Player/#SharedObjects

    But then: how to remove this automatically? Simply removing the folders and the settings.sol file every now and then (like by using launchd or Windows' Task Scheduler) may interfere with active browsers. Or is it safe to assume that, given the cross-browser nature, the plugin would not care if things are removed while it is active? Only clearing during log-off may not work for those who hibernate all the time.

    Firefox users can install BetterPrivacy or Objection to delete the Local Shared Objects (for all other browsers as well). I don't know if that also deletes the trail of website domain names.
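
    As a rough illustration of the periodic-deletion idea mentioned above, a minimal shell sketch for a Mac could look like the following (an untested sketch using only the paths listed above; as noted, deleting while a browser is running might interfere with it, so a log-out hook, launchd job or nightly cron job would be the natural place to run it):

        #!/bin/sh
        # Minimal sketch: wipe the Flash history trail and all Local Shared Objects
        # for the current user (paths as given above).
        FP="$HOME/Library/Preferences/Macromedia/Flash Player"
        # settings.sol and the per-domain folders inside sys/ feed the Website Storage Settings panel:
        rm -rf "$FP/macromedia.com/support/flashplayer/sys/"*
        # The actual Local Shared Objects live in randomly named subfolders of #SharedObjects:
        rm -rf "$FP/#SharedObjects/"*

    Scheduling that via launchd or cron would keep the trail short without manual clean-ups, but it does not answer the question of whether the plugin minds the folders disappearing underneath it while Flash content is active.
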
    Or: how to stop Flash from storing a history trail?

    Change of plans: I'm currently testing prohibiting Flash from writing to its own sys and #SharedObjects folders. So far, Flash has not tried to restore permissions (though, when deleting the folders, Flash will of course recreate them). I've not encountered any problems, but this may take a while to validate, using multiple browsers and sites. I've not yet found a log that reports errors. On a Mac:

        cd "$HOME/Library/Preferences/Macromedia/Flash Player/macromedia.com/support/flashplayer"
        rm -r sys/*
        chmod u-w sys
        cd "$HOME/Library/Preferences/Macromedia/Flash Player"
        # preserve the randomly named subfolders (only preserving the latest would suffice; see below)
        rm -r \#SharedObjects/*/*
        chmod -R u-w \#SharedObjects

    I guess the above chmods cannot be achieved on an old Windows system (I'm not sure about XP and Vista?). Though maybe on Windows one could replace the folders sys and #SharedObjects with dummy files with the same names? Anyone?

    Obviously, keeping Flash from storing those Local Shared Objects for all sites may cause problems. Some test results (Flash 10 on Mac OS X):

    When blocking the sys folder (even when leaving the #SharedObjects folder writable), YouTube won't remember your volume settings while viewing multiple videos. Temporarily allowing write access to the blocked folders while visiting trusted sites (to only create folders for domains you like, maybe including references in settings.sol) solves that. This way, for YouTube, Flash could be allowed to write to sys/#s.ytimg.com and #SharedObjects/s.ytimg.com, while Flash could not create new folders for other domains. One may also need to make settings.sol read-only afterwards, or delete it again.

    When blocking both the sys and #SharedObjects folders, YouTube and Vimeo work fine (though they might not remember any settings). However, Bits on the Run refuses to even show the video player. This is solved by temporarily unblocking the #SharedObjects folder, to allow Flash to create a subfolder with some random name. Within this folder, it would create yet another folder for the current Flash website (content.bitsontherun.com). Removing that website-specific folder, and blocking both #SharedObjects and the randomly named subfolder, still seems to allow Bits on the Run to operate, even though it still cannot write anything to disk. So: the existence of the randomly named subfolder (even when write protected) is important for some sites.

    When I first found the #SharedObjects folder, it held many subfolders with random names, some created on the very same day. I wonder when Flash decides it wants a new folder, and how it determines (and remembers) that random name.

    For a moment I considered not blocking write access for sys and #SharedObjects, but explicitly creating read-only folders for well-known third-party tracking domains (like based on a list from, for example, AdBlock Plus). That way, any other domain could still create Local Shared Objects. But the list would be long, and the domains from AdBlock Plus are probably all third-party domains anyway, so disabling "Allow third-party Flash content to store data on your computer" might have the very same result. Any experience anyone?

    (Final notes: if the above links to the settings panels do not work in the future, then use the URL that is known to Flash player as a starting point: www.adobe.com/go/settingsmanager. See also "You Deleted Your Cookies? Think Again" at Wired.com -- which uses Flash cookies itself as well... For the very suspicious using Time Machine: you may want to exclude both folders, for each user, and remove the trace that is already on your backup.)
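
    For reference, the "temporarily allow a trusted domain" routine described in the test results above could be scripted roughly as follows (a sketch only, reusing the same chmod approach; it simply unlocks the folders, waits while you visit the trusted site in a browser, and locks everything down again):

        #!/bin/sh
        # Sketch: let one trusted Flash site create its folders, then re-apply the write protection.
        FP="$HOME/Library/Preferences/Macromedia/Flash Player"
        SYS="$FP/macromedia.com/support/flashplayer/sys"
        chmod -R u+w "$SYS" "$FP/#SharedObjects"
        echo "Folders are writable; visit the trusted Flash site now, then press Enter."
        read dummy
        chmod u-w "$SYS"
        chmod -R u-w "$FP/#SharedObjects"
        echo "Write protection restored. Current per-domain folders in sys:"
        ls "$SYS"

    Whether this is enough for every site is exactly what the test results above are about; the sketch only automates the unlock and relock steps, it does not decide which domains deserve to be allowed.
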

    Read the article

  • Openconnect for Cisco VPN doesn't recognize private key file - asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag

    - by Alexander Skwar
    I'm trying to make my Synology DS212 NAS box also act as a VPN gateway to my company's VPN. Sadly, they only use Cisco ASA and, to complicate things even further, we've got to use personal certificates (which is of course more secure, but more complicated to get going…). So I compiled OpenConnect v4.06 from http://www.infradead.org/openconnect/.

    As a very basic test, I tried to build a connection by manually invoking openconnect, passing along the key and cert files, like so:

        /lib/ld-linux.so.3 --library-path /opt/lib \
          /opt/openconnect/sbin/openconnect \
          --certificate=$VPN_CFG/alexander.crt \
          --sslkey=$VPN_CFG/alexander.key \
          --cafile=$VPN_CFG/Company_VPN_CA.crt \
          --user=alexander --verbose <ip>:443

    It fails :(

        Attempting to connect to <ip>:443
        Using certificate file $VPN_CFG/alexander.crt
        Using client certificate '/[email protected]/OU=Company VPN'
        5919:error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag:tasn_dec.c:1315:
        Loading private key failed (see above errors)
        Loading certificate failed. Aborting.
        Failed to open HTTPS connection to <ip>
        Failed to obtain WebVPN cookie

    When I run the same command with the same cert/key files on an Ubuntu 12.04 box, it works:

        openconnect \
          --certificate=$VPN_CFG/alexander.crt \
          --sslkey=$VPN_CFG/alexander.key \
          --cafile=$VPN_CFG/Company_VPN_CA.crt \
          --user=alexander --verbose <ip>:443

        Attempting to connect to <ip>:443
        Using certificate file $VPN_CFG/alexander.crt
        Extra cert from cafile: '/CN=Company AG VPN CA/O=Company AG/L=Zurich/ST=ZH/C=CH'
        SSL negotiation with <ip>
        Server certificate verify failed: self signed certificate
        Certificate from VPN server "<ip>" failed verification. Reason: self signed certificate
        Enter 'yes' to accept, 'no' to abort; anything else to view: yes
        Connected to HTTPS on <ip>
        GET https://<ip>/
        […]

    Well… The error on the NAS is this:

        5919:error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag:tasn_dec.c:1315:

    Any ideas what's causing this? On the Syno, I use OpenConnect 4.06. On Ubuntu, I also compiled OpenConnect 4.06 and installed it to a custom location.

    Thanks, Alexander
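
    As a side note (an assumption, not something confirmed above): an ASN1_CHECK_TLEN "wrong tag" error while loading a private key is often a sign that the OpenSSL build on the failing machine cannot parse the key's encoding, for example a PKCS#8 key where a traditional RSA PEM key is expected, or a DER file with a .key extension. Assuming the key is an RSA key and an openssl binary is available on the NAS, a rough way to check and, if necessary, re-encode it could be:

        # Inspect the header: "BEGIN RSA PRIVATE KEY" (traditional PEM) vs.
        # "BEGIN PRIVATE KEY" / "BEGIN ENCRYPTED PRIVATE KEY" (PKCS#8) vs. binary (DER).
        head -1 $VPN_CFG/alexander.key

        # Re-encode to a traditional RSA PEM key (prompts for the passphrase if the key is encrypted),
        # then point --sslkey at the converted file. The alexander-rsa.key name is just an example.
        openssl rsa -in $VPN_CFG/alexander.key -out $VPN_CFG/alexander-rsa.key
        openssl rsa -in $VPN_CFG/alexander-rsa.key -noout -check

    If the older OpenSSL shipped on the Synology is the culprit, comparing how the two machines read the same key file should at least rule the file format in or out.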

    Read the article

< Previous Page | 59 60 61 62 63 64 65 66 67 68 69 70  | Next Page >