Search Results

Search found 1397 results on 56 pages for 'cookies'.

Page 32/56 | < Previous Page | 28 29 30 31 32 33 34 35 36 37 38 39  | Next Page >

  • PHP Store User Data without Sessions

    - by Sev
    Is there any way to store user data such as userid, email, etc. so it is accessible from all pages of a website after the user has logged in, but without using sessions or cookies? For example:

        class User {
            var $userid;
            var $username;
            var $email;
            .. methods ..
        }

    After they log in at login.php:

        $currentUser = new User($_POST['username']);

    Now, how do I access $currentUser from another page, such as index.php, if I shouldn't use sessions or cookies at all? So that I could do the following in index.php:

        if ($currentUser->userid > -1) {
            echo "you are logged in as: " . $currentUser->username;
        } else {
            echo "click here to login";
        }

    I asked a similar question before, here, but the answers didn't fulfill my needs.
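
    Without sessions or cookies, state has to travel with each request. Below is a minimal sketch of one such approach, not from the original question: pass a signed token with every request (hidden form field or URL parameter) and rebuild the user from the database on each page. The token scheme, the $secret value and the User::fromId() lookup are all hypothetical.

        <?php
        // Issue a tamper-proof token at login time (hypothetical scheme).
        function makeToken(int $userId, string $secret): string {
            return $userId . ':' . hash_hmac('sha256', (string)$userId, $secret);
        }

        // Validate the token on every request; return -1 if invalid.
        function userIdFromToken(?string $token, string $secret): int {
            if ($token === null || strpos($token, ':') === false) {
                return -1;
            }
            [$id, $sig] = explode(':', $token, 2);
            return hash_equals(hash_hmac('sha256', $id, $secret), $sig) ? (int)$id : -1;
        }

        // On every page (e.g. index.php):
        $secret = 'some-server-side-secret';
        $userId = userIdFromToken($_GET['token'] ?? null, $secret);
        if ($userId > -1) {
            $currentUser = User::fromId($userId); // hypothetical DB lookup
            echo "you are logged in as: " . $currentUser->username;
        } else {
            echo "click here to login";
        }

    The trade-off is that the token must be appended to every link or form on the site, which is exactly the work sessions normally do for you.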

    Read the article

  • login to any site using curl in php

    - by user550265
    I have tried to log in to a website using cURL in PHP:

        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, "http://www.sitename.com");
        curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, true);
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_COOKIEJAR, 'cookies.txt');
        curl_setopt($ch, CURLOPT_COOKIEFILE, 'cookies.txt');
        curl_setopt($ch, CURLOPT_POSTFIELDS, true);
        curl_setopt($ch, CURLOPT_HTTPAUTH, CURLAUTH_BASIC);
        curl_setopt($ch, CURLOPT_USERPWD, "username:password");
        curl_exec($ch);
        curl_close($ch);

    But this does not log in.
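
    Two things stand out in that attempt: CURLOPT_POSTFIELDS is set to true rather than the actual form data, and CURLAUTH_BASIC only helps for sites using HTTP Basic auth, not HTML login forms. A hedged sketch of a form-based login follows; the login URL and field names are placeholders, since the real ones come from the target site's login form:

        <?php
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, "http://www.sitename.com/login.php"); // hypothetical form action
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query([
            'username' => 'username', // actual field names vary per site
            'password' => 'password',
        ]));
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);      // follow the post-login redirect
        curl_setopt($ch, CURLOPT_COOKIEJAR, 'cookies.txt');  // save the session cookie
        curl_setopt($ch, CURLOPT_COOKIEFILE, 'cookies.txt'); // send it on later requests
        $response = curl_exec($ch);
        curl_close($ch);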

    Read the article

  • how to maintain session in cURL in php?

    - by newbie programmer
    How can we maintain a session in cURL? I have code that sends login details to a site and logs in successfully; I need the session maintained at the site to continue. Here is the code I used to log in to the site using cURL:

        <?php
        $agent = 'Mozilla/5.0'; // any browser User-Agent string
        $socket = curl_init();
        curl_setopt($socket, CURLOPT_URL, "http://www.XXXXXXX.com");
        curl_setopt($socket, CURLOPT_REFERER, "http://www.XXXXXXX.com");
        curl_setopt($socket, CURLOPT_POST, true);
        curl_setopt($socket, CURLOPT_USERAGENT, $agent);
        curl_setopt($socket, CURLOPT_POSTFIELDS, "form_logusername=XXXXX&form_logpassword=XXXXX");
        curl_setopt($socket, CURLOPT_COOKIESESSION, true);
        curl_setopt($socket, CURLOPT_COOKIEJAR, "cookies.txt");
        curl_setopt($socket, CURLOPT_COOKIEFILE, "cookies.txt");
        $data = curl_exec($socket);
        curl_close($socket);
        ?>
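
    The cookie jar written by that login request is what carries the session; replaying it on follow-up requests keeps you logged in. A minimal sketch, with the member-page URL as a placeholder:

        <?php
        $socket = curl_init();
        curl_setopt($socket, CURLOPT_URL, "http://www.XXXXXXX.com/members.php"); // hypothetical protected page
        curl_setopt($socket, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($socket, CURLOPT_COOKIEFILE, "cookies.txt"); // send the cookies saved at login
        curl_setopt($socket, CURLOPT_COOKIEJAR, "cookies.txt");  // persist any updates
        $page = curl_exec($socket);
        curl_close($socket);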

    Read the article

  • How to change Session for only one route in asp.net mvc?

    - by denis_n
    How can I handle Application_BeginRequest using a custom filter in ASP.NET MVC? I want to restore the session only for one route (~/my-url). It would be cool if I could create a custom filter and handle that there.

        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            var context = HttpContext.Current;
            if (string.Equals("~/my-url",
                              context.Request.AppRelativeCurrentExecutionFilePath,
                              StringComparison.OrdinalIgnoreCase))
            {
                string sessionId = context.Request.Form["sessionId"];
                if (sessionId != null)
                {
                    HttpCookie cookie = context.Request.Cookies.Get("ASP.NET_SessionId");
                    if (cookie == null)
                    {
                        cookie = new HttpCookie("ASP.NET_SessionId");
                    }
                    cookie.Value = sessionId;
                    context.Request.Cookies.Set(cookie);
                }
            }
        }

    Read the article

  • How to set credential persistence permanent on Android

    - by doreamon
    My app has a save-login-credential feature, so I store cookies for the next use after a successful sign-in. However, after a period of time the session times out and I can no longer log in with the cookies. On iOS, after setting credential persistence to permanent, the app works nicely even after restarting the phone:

        [[challenge sender] useCredential:[NSURLCredential credentialWithUser:username
                                                                     password:password
                                                                  persistence:NSURLCredentialPersistencePermanent]
                   forAuthenticationChallenge:challenge];

    On Android, I cannot find this kind of option. Here is part of my HttpHelper class:

        ((AbstractHttpClient) HttpHelper.client).getAuthSchemes().register("ntlm", new NTLMSchemeFactory());
        NTCredentials creds = new NTCredentials(user, pass, "", domain);
        ((AbstractHttpClient) HttpHelper.client).getCredentialsProvider().setCredentials(AuthScope.ANY, creds);

    The server is SharePoint, so I have to deal with NTLM authentication by following this instruction. If you have ideas, please let me know. Thank you.

    Read the article

  • West Wind WebSurge - an easy way to Load Test Web Applications

    - by Rick Strahl
    A few months ago on a project the subject of load testing came up. We were having some serious issues with a Web application that would start spewing SQL lock errors under somewhat heavy load. These sorts of errors can be tough to catch, precisely because they only occur under load and not during typical development testing. To replicate this error more reliably we needed to put a load on the application and run it for a while before these SQL errors would flare up.

    It had been a while since I'd looked at load testing tools, so I spent a bit of time looking at different tools and frankly didn't find anything that was a good fit. A lot of tools were either a pain to use, didn't have the basic features I needed, or were extravagantly expensive. In the end I got frustrated enough to build an initially small custom load test solution that then morphed into a more generic library, then gained a console front end, and eventually turned into a full-blown Web load testing tool that is now called West Wind WebSurge.

    I got seriously frustrated looking for tools every time I needed some quick and dirty load testing for an application. If my aim is just to put an application under heavy enough load to find a scalability problem in code, or to simply push an application to its limits on the hardware it's running on, I shouldn't have to struggle to set up tests. It should be easy enough to get going in a few minutes, so that tests can be set up quickly and run on a regular basis without a lot of hassle. And that was the goal when I started to build out my initial custom load tester into a more widely usable tool. If you're in a hurry and you want to check it out, you can find more information and download links here:

        West Wind WebSurge Product Page
        Walk through Video
        Download link (zip)
        Install from Chocolatey
        Source on GitHub

    For a more detailed discussion of the whys and hows and some background, continue reading.

    How did I get here?

    When I started out on this path, I wasn't planning on building a tool like this myself – but I got frustrated enough looking at what's out there to think that I can do better than what's available for the most common simple load testing scenarios. When we ran into the SQL lock problems I mentioned, I started looking at what's available in Web load testing solutions that would work for our whole team, which consisted of a few developers and a couple of IT guys, all of whom needed to be able to run the tests. It had been a while since I'd looked at tools and I figured that by now there should be some good solutions out there, but as it turns out I didn't really find anything that fit our relatively simple needs without costing an arm and a leg… I spent the better part of a day installing and trying various load testing tools, and to be frank most of them were either terrible at what they do, incredibly unfriendly to use, used terminology I couldn't even parse, or were extremely expensive (and I mean in the 'sell your liver' range of expensive). Pick your poison. There are also a number of online solutions for load testing, and they actually looked more promising, but those wouldn't work well for our scenario, as the application is running inside of a private VPN with no outside access into the VPN. Most of those online solutions also ended up being very pricey – presumably because the bandwidth required to test over the open Web can be enormous.

    When I asked around on Twitter about what people were using, I got mostly… crickets. Several people mentioned Visual Studio Load Test, and most other suggestions pointed to online solutions. I did get a bunch of responses, though, from people asking me to let them know what I found – apparently I'm not alone when it comes to finding load testing tools that are effective and easy to use.

    As to Visual Studio: the higher end SKUs of Visual Studio and the test edition include a Web load testing tool, which is quite powerful, but there are a number of issues with that. First, it's tied to Visual Studio, so it's not very portable – you need a VS install. I also find the test setup and terminology used by the VS test runner extremely confusing. Heck, it's complicated enough that there's even a Pluralsight course on using the Visual Studio Web test from Steve Smith. And of course you need to have one of the high end Visual Studio SKUs, and those are mucho dinero ($$$) – just for load testing, that's rarely an option.

    Some of the tools are ultra extensive and let you run analysis tools on the target servers, which is useful, but in most cases just plain overkill that only distracts from what I tend to be ultimately interested in: reproducing problems that occur at high load, and finding the upper limits and 'what if' scenarios as load is ramped up against a site. Yes, it's useful to have Web app instrumentation, but often that's not what you're interested in.

    I still fondly remember the early days of Web testing when Microsoft had the WAST (Web Application Stress Tool), which was rather simple – and also somewhat limited – but easily allowed you to create stress tests very quickly. It had some serious limitations (mainly that it didn't work with SSL), but the idea behind it was excellent: create tests quickly and easily and provide a decent engine to run them locally with minimal setup. You could get set up and run tests within a few minutes. Unfortunately, that tool died a quiet death, as have so many of Microsoft's tools that were probably built by an intern and then abandoned, even though there was a lot of potential and it was actually fairly widely used. Eventually the tool was no longer downloadable, and now it simply doesn't work anymore on higher end hardware.

    West Wind WebSurge – Making Load Testing Quick and Easy

    So I ended up creating West Wind WebSurge out of rebellious frustration… The goal of WebSurge is to make it drop-dead simple to create load tests. It's super easy to capture sessions, either using the built-in capture tool (big props to Eric Lawrence, Telerik and FiddlerCore, which made that piece a snap), using the full version of Fiddler and exporting sessions, or by manually or programmatically creating text files based on plain HTTP headers. I've been using this tool for 4 months now on a regular basis on various projects as a reality check for performance and scalability, and it's worked extremely well for finding small performance issues. I also use it regularly as a simple URL tester, as it allows me to quickly enter a URL plus headers and content, test that URL and its results, and easily save one or more of those URLs. A few weeks back I made a walk-through video that goes over most of the features of WebSurge in some detail. Note that the UI has changed slightly since then, with some UI improvements.

    Most notably, the test results screen has recently been updated to a different layout that provides more information about each URL in a session at a glance. The video and the main WebSurge site have a lot of info on basic operations. For the rest of this post I'll talk about a few deeper aspects that may be of interest, while also giving a glance at how WebSurge works.

    Session Capturing

    As you would expect, WebSurge works with sessions of URLs that are played back under load. You can create session entries manually by individually adding URLs to test (on the Request tab on the right) and saving them, or you can capture output from Web browsers, Windows desktop applications that call services, or your own applications, using the built-in capture tool. With this tool you can capture anything HTTP: SSL requests and content from Web pages, AJAX calls, SOAP or REST services – again, anything that uses Windows or .NET HTTP APIs. Behind the scenes the capture tool uses FiddlerCore, so basically anything you can capture with Fiddler you can also capture with the WebSurge session capture tool.

    Alternatively, you can use Fiddler itself and export the captured Fiddler trace to a file, which can then be imported into WebSurge. This is a nice way to let somebody capture a session without having to install WebSurge, or for your customers to provide an exact playback scenario for a given set of URLs that cause a problem. Note that not all applications work with Fiddler's proxy unless you configure one explicitly. For example, .NET Web applications that make HTTP calls usually don't show up in Fiddler by default; for those .NET applications you can explicitly override proxy settings to capture requests to service calls.

    The capture tool also has handy optional filters that allow you to filter by domain, to help block out noise that you typically don't want to include in your requests. For example, if your pages include links to CDNs, Google Analytics or social links, you typically don't want those in your load test, so by capturing just from a specific domain you are guaranteed content from only that one domain. Additionally, you can provide URL filters in the configuration file – filter strings that, if contained in a URL, cause those requests to be ignored. Again, this is useful if you don't filter by domain but want to filter out things like static image, CSS and script files. Often you're not interested in the load characteristics of these static, usually cached resources, as they just add noise to tests and skew the overall URL performance results. In my testing I tend to care only about my dynamic requests.

    SSL Captures require Fiddler

    Note that in order to capture SSL requests you'll have to install Fiddler's SSL certificate. The easiest way to do this is to install Fiddler and use its SSL configuration options to get the certificate into the local certificate store. There's a document on the Telerik site that provides the exact steps to get SSL captures to work with Fiddler and therefore with WebSurge.

    Session Storage

    A group of URLs entered or captured makes up a session. Sessions can be saved and restored easily, as they use a very simple text format that is simply stored on disk. The format is slightly customized HTTP header traces separated by a separator line.
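
    Based on that description, a stored session entry presumably looks something like the following. This is an illustrative sketch, not an actual WebSurge capture; the exact separator text is whatever WebSurge/Fiddler emit:

        GET http://www.west-wind.com/websurge/ HTTP/1.1
        Accept: text/html
        User-Agent: Mozilla/5.0

        ------------------------------------------------------------------

        GET http://www.west-wind.com/websurge/features.aspx HTTP/1.1
        Accept: text/html
        User-Agent: Mozilla/5.0
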
    The headers are standard HTTP headers, except that the full URL, instead of just the domain-relative path, is stored as part of the first HTTP header line for easier parsing. Because it's just text and uses the same format that Fiddler uses for exports, it's super easy to create sessions by hand or under program control by writing out a simple text file. The raw captured format is also what's stored to disk and what WebSurge parses from. The only 'custom' part of these headers is that the first line contains the full URL instead of the domain-relative path and Host: header. The rest of each entry is just plain standard HTTP headers, with each individual URL isolated by a separator line. Since the format matches what Fiddler produces for exports, it's easy to exchange or view data in either Fiddler or WebSurge. URLs can also be edited interactively, so you can modify the headers easily as well – again, it's just plain HTTP headers, so anything you can do with HTTP can be added here.

    Use it for single URL Testing

    Incidentally, I've also found this form an excellent way to test and replay individual URLs for simple non-load-testing purposes. Because you can capture one or many URLs and store them on disk, this also provides a nice HTTP playground where you can record URLs with their headers, fire them one at a time or as a session, and see results immediately. It's a handy setup for REST presentations, and I find the simple UI flow easier than using Fiddler natively. Finally, you can save one or more URLs as a session for later retrieval. I'm using this more and more for simple URL checks.

    Overriding Cookies and Domains

    Speaking of HTTP headers – you can also override cookies as part of the options. One thing that happens with modern Web applications is that you have session cookies in use for authorization, and these cookies tend to expire at some point, which would invalidate a test. Using the Options dialog you can override the cookie, which replaces the cookie for all requests with the value specified there. You can capture a valid cookie from a manual HTTP request in your browser and paste it into the cookie field, to replace the existing cookie with a new one that is valid. Likewise, you can easily replace the domain, so if you captured URLs on west-wind.com and now you want to test on localhost you can do that easily as well. You could even do something like capture on store.west-wind.com and then test on localhost/store, which would also work.

    Running Load Tests

    Once you've created a session you can specify the length of the test in seconds, and specify the number of simultaneous threads to run each session on. Sessions run through each of the URLs in the session sequentially by default. One option in the options list is to randomize the URLs, so each thread runs requests in a different order. This avoids bunching up URLs when tests start, when all threads would otherwise run the same requests simultaneously, which can sometimes skew the results of the first few minutes of a test. While sessions run, some progress information is displayed. By default there's a live view of requests displayed in a console-like window. At the bottom of the window there's a running total summary that displays where you're at in the test, how many requests have been processed, and the current requests-per-second count across all requests.

    Note that for tests that run over a thousand requests a second, it's a good idea to turn off the console display. While the console display is nice for seeing that something is happening, and gives you a slight idea of what's happening with actual requests, once a lot of requests are processed the UI updating adds a lot of CPU overhead to the application, which may reduce the actual load generated. If you are running 1,000 requests a second there's not much to see anyway, as requests roll by far too fast to read individual lines. If you look on the options panel, there is a NoProgressEvents option that disables the console display. Note that the summary display is still updated approximately once a second, so you can always tell that the test is still running.

    Test Results

    When the test is done you get a simple results display. On the right you get an overall summary as well as a breakdown by each URL in the session. Both successes and failures are highlighted, so it's easy to see what's breaking in your load test. The report can be printed, or you can open the HTML document in your default Web browser for printing to PDF or saving the HTML document to disk. The list on the right shows you a partial list of the URLs that were fired, so you can look in detail at the request and response data. The list can be filtered by success and failure requests. Each list is partial only (at the moment) and limited to a max of 1,000 items in order to render reasonably quickly.

    Each item in the list can be clicked to see the full request and response data. This is particularly useful for errors, so you can quickly see and copy what request data was used, and in the case of a GET request you can just click the link to jump to the page. For non-GET requests you can find the URL in the session list and use the context menu to test the URL as configured, including any HTTP content data to send. You get to see the full HTTP request and response, as well as a link in the request header to visit the actual page – not so useful for a POST, but definitely useful for GET requests.

    Finally, you can also get a few charts. The most useful one is probably the requests-per-second chart, which can be accessed from the Charts menu or shortcut. Results can also be exported to JSON, XML and HTML. Keep in mind that these files can get very large rather quickly, so exports can take a while to complete.

    Command Line Interface

    WebSurge runs with a small core load engine, and this engine is plugged into the front-end application I've shown so far. There's also a command line interface available to run WebSurge from the Windows command prompt. Using the command line you can run tests for either an individual URL (similar to ab.exe, for example) or a full session file. By default WebSurgeCli shows progress every second, displaying the total request count, failures, and the requests per second for the entire test. A silent option can turn off this progress display and show only the results. The command line interface can be useful for build integration, allowing you to check for failures or verify that a specific requests-per-second count is hit. It's also nice as a quick-and-dirty URL test facility, similar to the way you'd use Apache Bench (ab.exe). Unlike ab.exe, though, WebSurgeCli supports SSL and makes it much easier to create multi-URL tests, using either manual editing or the WebSurge UI.

    Current Status

    Currently West Wind WebSurge is still in beta status. I'm still adding small new features and tweaking the UI in an attempt to make it as easy and self-explanatory as possible to run. Documentation for the UI and specialty features is also still a work in progress.

    I plan on open-sourcing this product, but it won't be free. There's a free version available that provides a limited number of threads and request URLs to run. A relatively low-cost license removes the thread and request limitations. Pricing info can be found on the Web site – there's an introductory price, which is $99 at the moment, which I think is reasonable compared to most other for-pay solutions out there that are exorbitant by comparison…

    The reason the code is not available yet is – well, the UI portion of the app is a bit embarrassing in its current monolithic state. The UI started as a very simple interface originally that later got a lot more complex – yeah, that never happens, right? Unless there's a lot of interest I don't foresee re-writing the UI entirely (which would be ideal), but in the meantime at least some cleanup is required before I dare to publish it :-). The code will likely be released with version 1.0.

    I'm very interested in feedback. Do you think this could be useful to you and provide value over other tools you may or may not have used before? I hope so – it has already provided a ton of value for me and the work I do, which made the development worthwhile at this point. You can leave a comment below, or for more extensive discussions you can post a message on the West Wind Message Board in the WebSurge section.

    Microsoft MVPs and Insiders get a free license

    If you're a Microsoft MVP or a Microsoft Insider you can get a full license for free. Send me a link to your current, official Microsoft profile and I'll send you a not-for-resale license. Send any messages to [email protected].

    Resources

    For more info on WebSurge and to download it to try it out, use the following links:

        West Wind WebSurge Home
        Download West Wind WebSurge
        Getting Started with West Wind WebSurge Video

    © Rick Strahl, West Wind Technologies, 2005-2014. Posted in ASP.NET.

    Read the article

  • A small, intra-app Object to String Serializer

    - by Rick Strahl
    On a few occasions I've needed a very compact serializer for small and simple, flat object serialization, typically for storage in Cookies or a FormsAuthentication ticket in ASP.NET. XML and JSON serialization are too verbose for those scenarios so a simple property serializer that strings together the values was needed. Originally I did this by hand, but here is a class that automates the process.

    Read the article

  • Google launches the Remarketing service, which lets AdWords advertisers follow users across the web

    Google launches Remarketing, a service that follows users everywhere across the web. Google announced on the official AdWords blog the launch of Remarketing, a new feature for AdWords advertisers that will let them follow internet users across the various sites they visit. [IMG]http://djug.developpez.com/rsc/google-adwords.jpg[/IMG] The classic AdSense system displays targeted ads based on the context of the sites visited; the Remarketing feature instead places cookies on the user's machine in order to follow them across the web and re-display the same ads on the different sites they visit,...

    Read the article

  • How much info can I store in a cookie?

    - by Artemix
    Hi guys, I'm developing a Flash game and I'd like to know how much info I can store in a browser cookie. The game is simple, but it needs to store several variables in order to save all the details of your current progress. The game is only one SWF file; no server, no nothing. I need to know how I should use cookies to achieve this, and whether they offer the possibility of doing it, of course. (By "several" I mean around 200 variables.)

    Read the article

  • Chrome Facebook Issue Ubuntu 11.10

    - by David Gaviria Piedrahita
    On Facebook, navigating with Google Chrome 15.0.874.121 on Ubuntu 11.10, when I try to comment, chat, or "like" something, the following blank page appears and doesn't let me do anything: http://www.facebook.com/ajax/ufi/modify.php I've tried the following, based on what I found on the internet:

        Erasing cookies and cache
        Desynchronizing Chrome before erasing it
        Uninstalling Chrome with: sudo apt-get purge google-chrome-stable
        Manually erasing the ~/.config/google-chrome directory

    And nothing solves the problem. Any ideas? I'd appreciate your help. Thanks

    Read the article

  • Where to store users consent (EU cookie law)

    - by Mantorok
    We are legally obliged in a few months to obtain consent from users before we store any cookies on their PC. My query is: what would be the most effective way of storing this consent, to ensure that users don't get repeat requests to give consent in the future? Obviously, for authenticated users I can store it against their profile. But what about non-authenticated users? My initial thought, ironically, was to store the given consent in a cookie..?
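
    A minimal sketch of that ironic-but-common approach in PHP; the cookie name and lifetime are arbitrary illustrative choices, and a consent cookie is generally regarded as "strictly necessary" and therefore exempt from the consent requirement itself:

        <?php
        if (isset($_POST['accept_cookies'])) {
            // Record consent for one year, site-wide.
            setcookie('cookie_consent', '1', time() + 365 * 24 * 60 * 60, '/');
        }
        $hasConsent = isset($_COOKIE['cookie_consent']);
        if (!$hasConsent) {
            // Show the consent banner and withhold non-essential cookies.
        }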

    Read the article

  • What should I aware of , when preparing a document of website for later maintenance use?

    - by user782104
    The development team has finished a website, and my duty is to prepare a document so that other programmers can maintain the website with ease. As I am inexperienced at this, I would like to ask what should be mentioned (document structure) in that report? So far my only idea is to prepare an ERD diagram for the database and a flow chart for each function. Any other suggestions, e.g. what cookies are stored? Thanks

    Read the article

  • Cookie manager PHP

    - by HaCos
    I own a Joomla commerce store, and although I use Google Analytics to track visitors, I need to install a cookie manager in order to track the cookies that were set on a customer when they placed an order. To be more specific, I am planning to join an affiliate network, and I need to track not only the last visit of a customer but also whether they carry a cookie, and from which affiliate network.
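
    For illustration, a hypothetical sketch of the usual mechanics: tag the visitor with the referring affiliate network when they arrive, then read the tag back when the order is placed. The parameter and cookie names are made up:

        <?php
        // On landing: remember which affiliate network referred this visitor.
        if (isset($_GET['aff_id']) && !isset($_COOKIE['affiliate'])) {
            setcookie('affiliate', $_GET['aff_id'], time() + 30 * 24 * 60 * 60, '/'); // 30-day attribution window
        }

        // Later, when the customer places an order:
        $affiliate = $_COOKIE['affiliate'] ?? 'none';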

    Read the article

  • List of common pages to have in the footer [closed]

    - by user359650
    I would like to post this question as a reference for webmasters wondering what pages they should include in the footer. I will use answers to complete my initial list:

    About us / About MyCompany / MyCompany
        About / About us: description of the company, its mission, and its vision.
        History: summary of milestones achieved by the company.
        The team / Management / Board of directors: depending on the size of the company there may be one or more pages describing the people involved in the company, depending on their position.
        Awards: list of awards received by the company, if any.
        In the press / They're talking about us: list of links to external websites, usually highly regarded news websites, which mentioned the company in one of their articles.

    Media
        Wallpapers: wallpapers with the company logo in different colors and formats that fans can set as the desktop image for their computer.
        Logos: the company logo in different colors and formats that websites/blogs posting about the website can use for illustration purposes.
        Media kits: documents, usually in PDF format, summarizing the key company figures and facts that journalists can download and read to get a quick overview of the company.

    Misc
        Contact / Contact us: contact details the company is prepared to disclose, if any (address, email, phone), or a contact form.
        Careers / Jobs / Join us: list of open vacancies with a contact form to apply.
        Investors / Partners / Publishers: information and contact forms for companies willing to become investors/partners/publishers, or a login page to access a portal restricted to those who already are.
        FAQ: list of common questions and answers to guide users and reduce the number of support requests.

    Follow us / Community
        Facebook / Twitter / Google+: links to the company's pages/accounts on various social networks.

    Legal
        Terms / Terms of use / Terms & Conditions: rules users must follow when browsing the website.
        Privacy / Privacy Statement: explanation of how the company deals with users' personal data and what users can do about it (request information to be deleted...).
        Cookies: a page that is appearing on more and more websites due to new regulation (notably EU) imposing more transparency and control for users regarding cookies (e.g. the BBC cookie page).

    Any input is welcome. PS: if someone with enough rep could add the footer tag that would be great (min. 300 required).

    Read the article

  • How to track site visitors across several browser sessions or computers using Google Analytics?

    - by Craig
    If a site visitor clears cookies or uses various browsers or computers, then Google Analytics will have a hard time detecting their visits as being by the same user. However, since 95% of the site content is only available when logged in, I should be able to identify multiple visits as the same user as long as they log in. How can I tell Google that the visits are by the same user (without breaking the Terms and Conditions)?

    Read the article

  • How to suspend a user from coming back on my website and register again? any ideas? [closed]

    - by ahmed amro
    I am an outsourcing person, not a programmer, and I am working on a shopping website like eBay, so my question might be beginner-level for everyone. My website will need to suspend users who violate the terms and conditions. Here are some thoughts on my mind:

        IP address tracking
        User information (email address, or any information repeated on a second registration after suspension)
        Session ID cookies, which are also a way to identify users after login

    Any more creative ideas for avoiding fraud and scammers are welcome. Is it possible to make it 100% impossible for those bad users to come back?
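
    For illustration, a hypothetical registration-time check combining the signals listed above; the table and column names are made up, and none of these signals is foolproof:

        <?php
        // $pdo is an existing PDO connection; $email comes from the registration form.
        $stmt = $pdo->prepare('SELECT COUNT(*) FROM banned_users WHERE email = ? OR last_ip = ?');
        $stmt->execute([$email, $_SERVER['REMOTE_ADDR']]);
        if ($stmt->fetchColumn() > 0) {
            exit('Registration is not available for this account.');
        }
        // A determined user can change email, switch IP (VPN/proxy) and clear
        // cookies, so 100% prevention is not realistic; combining several
        // signals only raises the bar.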

    Read the article

  • The accelerometer: the ultimate cookie for mobile tracking? Researchers show that your phone's movements give away your identity

    The accelerometer: the ultimate cookie for mobile tracking? Researchers show that the movements of your phone give away your identity. As Adage, which covers news from the advertising industry, points out, one of the common limitations of cookies is their inability to track a smartphone. It is this reality, among many others, that has pushed heavyweights of the technology industry such as Microsoft, Google and Apple to work on alternatives...

    Read the article

  • Is it possible (and practical) to search a string for arbitrary-length repeating patterns?

    - by blz
    I've recently developed a huge interest in cryptography, and I'm exploring some of the weaknesses of ECB-mode block ciphers. A common attack scenario involves encrypted cookies, whose fields can be represented as (relatively) short hex strings. Up until now, I've relied on my eyes to pick out repeating blocks, but this is rather tedious. I'm wondering what kind of algorithms (if any) could help me automate my search for repeating patterns within a string. Can anybody point me in the right direction?
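
    For the ECB case the block size is fixed, which makes the search easy; here is a minimal PHP sketch, where the block size in hex characters is a parameter (e.g. 32 for AES's 16-byte blocks):

        <?php
        function findRepeatingBlocks(string $hex, int $hexCharsPerBlock = 32): array {
            // Split into cipher-block-sized chunks and count duplicates;
            // any repeat reveals identical plaintext blocks under ECB.
            $counts = array_count_values(str_split($hex, $hexCharsPerBlock));
            return array_filter($counts, fn ($n) => $n > 1); // block => occurrences
        }

        // Toy example with 8-hex-char (4-byte) blocks:
        print_r(findRepeatingBlocks('aabbccdd11223344aabbccdd99887766', 8));
        // prints: Array ( [aabbccdd] => 2 )

    For truly arbitrary-length patterns, a suffix array or suffix tree finds repeated substrings in near-linear time, but for ECB analysis the fixed-block scan above is usually all you need.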

    Read the article

  • Mac OS X: which folders should ClamXav Sentry watch?

    - by trolle3000
    I'm using ClamXav on my Mac. I've read this, and I am aware of the whole Macs-need-no-AV-but-they-do-anyway discussion. I guess that's why I would feel like a real jerk if I somehow managed to compromise my system! So ClamXav has been downloaded and ClamXav Sentry set up to start on log-in, but it doesn't really do anything until you tell it to. Specifically, you have to tell it which folders to watch for viruses, so I'm wondering: where are good places to look? Currently it's set up to watch the following places:

    In the home folder:
        ~/Downloads
        ~/Library/Caches
        ~/Library/Contextual Menu Items
        ~/Library/Cookies
        ~/Library/Internet Plug-Ins
        ~/Library/LaunchAgents

    In my system folder:
        /Library/Application Support
        /Library/Caches
        /Library/Contextual Menu Items
        /Library/Cookies
        /Library/Internet Plug-Ins
        /Library/LaunchAgents
        /Library/LaunchDaemons
        /Library/StartupItems

    Basically, this is 100% conjecture. All (or most of) the folders have something to do with the Internet and things that start up automatically, so I'm guessing that's where viruses go. But still, the question: which folders should ClamXav Sentry watch, if any? FYI, I'm not using any mail applications, but please include that in your answer for anyone who might be interested.

    Read the article

  • Apache web logs "GET http/1.1" 200 error?

    - by songdogtech
    I've got some errors in my webserver logs that are puzzling, at least to me. What's puzzling is that they show my error page being referenced, but that error page doesn't show up as being displayed in hits on statcounter.com. Can someone tell me the difference between these two kinds of entries? Or is only one a "real" error? This is in WordPress, and my theme's 404.php redirects to a static WordPress page at mydomain.com/error/.

    These first two log entries reference /error/, but no "hit" on my /error/ page shows in my web stats. The page referenced - /2009/08/delete-preference-files/ - exists, and shows a "hit" at StatCounter.

        92.17.232.242 - - [13/Mar/2010:01:45:43 -0800] "GET /error/ HTTP/1.1" 200 4012 "http://markratledge.com/2009/08/delete-preference-files/" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.8) Gecko/20100202 Firefox/3.5.8 (.NET CLR 3.5.30729)" 732 4432
        92.17.232.242 - - [13/Mar/2010:01:47:11 -0800] "GET /error/ HTTP/1.1" 200 4012 "http://markratledge.com/wp-content/themes/default/style.css" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.8) Gecko/20100202 Firefox/3.5.8 (.NET CLR 3.5.30729)" 861 4432

    These two entries do result in my /error/ page being displayed, because /error/ gets a "hit" in StatCounter. The page /cookies/ doesn't exist.

        72.174.16.202 - - [13/Mar/2010:10:53:37 -0800] "GET /cookies HTTP/1.1" 302 21 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6 (.NET CLR 3.5.30729)" 394 642
        72.174.16.202 - - [13/Mar/2010:10:53:38 -0800] "GET /error/ HTTP/1.1" 200 4012 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6 (.NET CLR 3.5.30729)" 393 4432

    Read the article

  • Alias using Nginx causing phpMyAdmin login endless loop

    - by Seb Dangerfield
    Recently I've been trying to set up a web server using Nginx (I normally use Apache). However, I've run into a problem trying to set up phpMyAdmin on an alias. The alias correctly takes you to the phpMyAdmin login screen; however, when you enter valid credentials and hit go, you end up back on the login screen with no errors. That sounded like a cookie or session problem to me... but if I symlink the phpMyAdmin directory and try logging in through the symlinked version, it works fine! Both the symlinked version and the alias set the same number of cookies, and both seem to set the cookies for the correct domain and path. My Nginx config for the PHP alias is as follows:

        location ~ ^/phpmyadmin/(.*\.php)$ {
            alias /usr/share/phpMyAdmin/$1;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $request_filename;
        }

    I'm running Nginx 0.8.53, PHP 5.3.3, MySQL 5.1.47, phpMyAdmin 3.3.9 (self-install), and php-mcrypt is installed. Has anyone else experienced this behaviour before? Anyone have any ideas about how to fix it?

    Read the article

  • Download JDK onto a remote server

    - by itsadok
    I want to get the latest JDK onto a server in a remote location. Downloading the JDK from Sun's website requires jumping through all kinds of hoops until you actually get the file. I'm not sure whether they use cookies or my IP address, but simply copying the file URL and trying wget on the server doesn't work. Googling for mirrors of the JDK, I could only find old versions. Right now I'm left with the option of downloading it onto my computer, then uploading it to the server. This feels slow and stupid. Anyone got a better idea?

    EDIT: Thanks for all the replies. Just to clarify, as I'm writing this I'm rsyncing the 78MB file to my server. It should be done in about an hour, so it's not such a big deal. However, since this is not the first time I'm doing this, I was hoping for a better solution for next time.

    Solution: What I ended up doing was

        sudo aptitude install lynx-cur
        www-browser http://java.sun.com/javase/downloads/

    From there it's mostly using the arrow and enter keys, and answering "Yes" to a lot of lynx security questions (about cookies and certificates). Thanks to resonator.

    Read the article

  • Why do I have to log into hotmail twice?

    - by Tony Lee
    I just recently noticed I have to attempt a login to Hotmail twice before it succeeds. Although I'm using Google Chrome (3.0.195.21), the symptoms are well described in a Mozilla thread. In short, I'm told: "The e-mail address or password is incorrect. Please try again." The thread on Mozilla's site that is supposed to describe the latest details (and the first hit on Google when I search for "hotmail login twice") requires an account to read, so I'm hoping someone here has a good synopsis of what the cause is. I normally start at hotmail.com, which redirects to login.live.com/.... I can log in by starting at mail.live.com, using IE8, or attempting a second login. Oddly, if I start at login.live.com, Chrome tells me there is a redirect loop. Does anyone know, or have a public link to, the root cause of the double login? (It is a Hotmail account I'm logging into.)

    EDIT - Caused by my restricting how third parties can use cookies. If I allow all cookies, it works the first time.

    Read the article

< Previous Page | 28 29 30 31 32 33 34 35 36 37 38 39  | Next Page >