Search Results

Search found 13797 results on 552 pages for 'browser madness'.


  • Client Side Only Cookies

    - by Mike Jones
    I need something like a cookie, but I specifically don't want it going back to the server. I call it a "client-side session cookie", but any reasonable mechanism would be great. Basically, I want to store some data encrypted on the server and have the user type a password into the browser. The browser decrypts the data with the password (or creates and encrypts the data with the password), and the server stores only encrypted data. To keep the data secure on the server, the server should not store and should never receive the password. Ideally there should be a session-style expiration to clean up. Of course, I need it to be available on multiple pages as the user walks through the web site. The best I can come up with is some sort of iframe mechanism to store the data in JavaScript variables, but that is ugly. Does anyone have any ideas on how to implement something like this? FWIW, the platform is ASP.NET, but I don't suppose that matters. It needs to support a broad range of browsers, including mobile. In response to one answer below, let me clarify: my question is not how to do the crypto; that isn't a problem. The question is where to store the password so that it persists from page to page, but not beyond a session, and in such a way that the server never sees it.
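
    One possible direction, offered only as a hedged sketch (not from the original question): sessionStorage lives in the browser for the current tab's session, survives navigation between pages on the same site, and is never transmitted to the server, which matches the "client-side session cookie" idea. The key name below is a placeholder, and very old mobile browsers may lack sessionStorage.

        // Minimal sketch: keep the password client-side for the browsing session only.
        function rememberPassword(password) {
            sessionStorage.setItem('vaultPassword', password);   // never sent to the server
        }

        function getPassword() {
            // Returns null after the tab/session closes, giving a natural expiration.
            return sessionStorage.getItem('vaultPassword');
        }

        function forgetPassword() {
            sessionStorage.removeItem('vaultPassword');          // explicit "log out"
        }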

    Read the article

  • Redirect requests only if the file is not found?

    - by ZenBlender
    I'm hoping there is a way to do this with mod_rewrite and Apache, but maybe there is another way to consider too. On my site, I have directories set up for re-skinned versions of the site for clients. If the web root is /home/blah/www, a client directory would be /home/blah/www/clients/abc. When you access the client directory via a web browser, I want it to use any requested files in the client directory if they exist. Otherwise, I want it to use the file in the web root. For example, let's say the client does not need their own index.html. Therefore, some code would determine that there is no index.html in /home/blah/www/clients/abc and will instead use the one in /home/blah/www. Keep in mind that I don't want to redirect the client to the web root at any time, I just want to use the web root's file with that name if the client directory has not specified its own copy. The web browser should still point to /clients/abc whether the file exists there or in the root. Likewise, if there is a request for news.html in the client directory and it DOES exist there, then just serve that file instead of the web root's news.html. The user's experience should be seamless. I need this to work for requests on any filename. If I need to, for example, add a new line to .htaccess for every file I might want to redirect, it rather defeats the purpose as there is too much maintenance needed, and a good chance for errors given the large number of files. In your examples, please indicate whether your code goes in the .htaccess file in the client directory, or the web root. Web root is preferred. Thanks for any suggestions! :)
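
    A hedged sketch of the usual mod_rewrite pattern for this kind of fallback, placed in the web root's .htaccess (the client directory name is the asker's; treat the exact rules as a starting point, not a drop-in):

        # Sketch: serve a file from /clients/abc when it exists there, otherwise
        # fall back to the copy in the web root, without changing the URL.
        RewriteEngine On

        # If the request is under /clients/abc/ and that file does NOT exist on disk...
        RewriteCond %{DOCUMENT_ROOT}/clients/abc/$1 !-f
        # ...internally rewrite it to the same filename in the web root.
        RewriteRule ^clients/abc/(.*)$ /$1 [L]

    Because the rewrite is internal, the browser keeps showing /clients/abc/... while Apache serves whichever copy of the file actually exists, with no per-file rules to maintain.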

    Read the article

  • CSS/Javascript: multiple columns

    - by Patrick
    hi, I'm looking for a columnizer plugin (making columns out of my small divs). It is very important that it has the following features: 1) It has to be as light as possible (CSS-only would be great, but I guess it is hard to make that work in IE). 2) It has to be cross-browser (I don't need IE6, but IE7 and IE8 compatibility is required). 3) The divs must not be broken up. In other words, a node has to be moved to the next column as a whole, not split in two. The nodes are div elements and may contain other divs, images and text. 4) The columns have to have a fixed width and a fixed margin. This means that when I resize the browser and new columns are created (because the window becomes wider), the columns have to rigidly keep the same width and the same distance between them (margin: 20px, width: 200px). Pure CSS would be great, but I'm afraid I need a jQuery plugin because I need all 4 features to be supported. I found several plugins and CSS stylesheets with very good partial solutions, but I couldn't find a complete one. Thanks
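
    For reference only, a hedged sketch of how the pure-CSS multi-column route would express these constraints; this is exactly the feature IE7/IE8 never got, which is why requirement 2 pushes the answer toward a jQuery plugin. Class names are placeholders.

        /* Sketch: columns of roughly 200px with a fixed 20px gutter. */
        .columnized {
            column-width: 200px;   /* ideal column width; browsers may flex it slightly to fill the container */
            column-gap: 20px;      /* fixed distance between columns */
        }
        .columnized > div {
            break-inside: avoid;   /* move a whole div to the next column instead of splitting it */
        }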

    Read the article

  • XPath evaluation error in Android

    - by R_Dhorawat
    I'm running an application in the Android browser which contains the following code:

        if (typeof XPathResult != "undefined") {
            // use built-in XPath support (Safari 3.0+ style)
            //alert("xpathExpr" + xpathExpr);
            //alert("doc" + doc);
            var xmlDocument = doc;
            if (doc.nodeType != 9) {
                xmlDocument = doc.ownerDocument;
            }
            results = xmlDocument.evaluate(
                xpathExpr,
                doc,
                function(prefix) { return namespaces[prefix] || null; },
                XPathResult.ANY_TYPE,
                null
            );
            var thisResult;
            result = [];
            var len = 0;
            do {
                thisResult = results.iterateNext();
                if (thisResult) {
                    result[len] = thisResult;
                    len++;
                }
            } while (thisResult);
        } else {
            try {
                if (doc.selectNodes) {
                    result = doc.selectNodes(xpathExpr);
                }
            } catch (ex) {}
        }
        return result;

    When I run this in Firefox, control enters the if branch and everything works fine. In the Android browser, however, it reports that XPathResult is undefined, so control falls through to the else branch, where selectNodes is also undefined, and the result comes back as null, whereas Firefox returns the list of nodes. I really need to get this working. Help needed, thanks.

    Read the article

  • C# threading solution for long queries

    - by Eddie
    Scenario: We have an application that records incidents. An external database needs to be queried when an incident is approved by a supervisor. The queries to this external database sometimes take a while to run, and that lag is experienced through the browser. Possible solution: I want to use threading to eliminate the apparent hang in the browser. I have used the Thread class before and have heard about ThreadPool, but I just found BackgroundWorker in this post. MSDN states: "The BackgroundWorker class allows you to run an operation on a separate, dedicated thread. Time-consuming operations like downloads and database transactions can cause your user interface (UI) to seem as though it has stopped responding while they are running. When you want a responsive UI and you are faced with long delays associated with such operations, the BackgroundWorker class provides a convenient solution." Is BackgroundWorker the way to go when handling long-running queries? What happens when 2 or more BackgroundWorker processes are run simultaneously? Are they handled like a pool?
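
    For reference, a minimal BackgroundWorker sketch (hedged: QueryExternalDatabase and StoreQueryResult are hypothetical stand-ins for the asker's external query and result handling, not real API):

        using System;
        using System.ComponentModel;

        static class IncidentQueries
        {
            // Hypothetical stand-ins for the real external-database call and result handling.
            static object QueryExternalDatabase(int incidentId) { return "result for " + incidentId; }
            static void StoreQueryResult(object result) { Console.WriteLine(result); }

            public static void RunIncidentQuery(int incidentId)
            {
                var worker = new BackgroundWorker();

                worker.DoWork += (sender, e) =>
                {
                    // Runs off the calling thread, so the page/UI thread is not blocked.
                    e.Result = QueryExternalDatabase((int)e.Argument);
                };

                worker.RunWorkerCompleted += (sender, e) =>
                {
                    // Fires when the query finishes; e.Result carries whatever DoWork produced.
                    StoreQueryResult(e.Result);
                };

                worker.RunWorkerAsync(incidentId);   // start the long-running work
            }
        }

    Despite the "dedicated thread" wording in the MSDN quote, RunWorkerAsync queues DoWork onto the ThreadPool, so several workers started at once simply share the pool; in ASP.NET specifically, ThreadPool.QueueUserWorkItem or an asynchronous page is often the more natural fit than BackgroundWorker, which was designed with UI frameworks in mind.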

    Read the article

  • Is it still true that making cross-browser layouts for desktop browsers using table+css is easier than

    - by metal-gear-solid
    One of my web designer friends still builds sites with tables, but he uses CSS very nicely. I also use CSS nicely, but with <div>, and I run into more cross-browser layout problems than my friend does. I gave my friend some reasons against <table>; here is our whole discussion:
    I - Your sites will be problematic with screen readers.
    My friend - OK, but I have never had a call from any client about this.
    I - You will spend more time making layout changes when the client asks for them.
    My friend - I don't think so, but if that's true, show me how I can save time with <div>.
    I - Your sites will not work well with search engines.
    My friend - That's not true. I've made many sites and never had a problem with any site or client over this.
    I - Table layout is the old, non-W3C, non-standard way.
    My friend - What is old and what is new? Who is the W3C? I don't know. What is "standard"? Whatever I make works in all browsers; that's enough for me, and my clients will not pay for standards or W3C guidelines.
    I - Your sites will not work in mobile browsers.
    My friend - No problem for me; my clients don't care about mobile phones.
    I - Your sites are not accessible.
    My friend - What do you mean, not accessible? Whatever I make works in all browsers, and no client has ever asked about accessibility.
    I - You will not get more work in the future with tables.
    My friend - OK, no problem. When clients stop accepting sites built with tables, I will learn about div-based layouts.
    My questions: Is it still true that making cross-browser layouts for desktop browsers with table+css is easier than with div+css? What is the benefit to the developer of using DIV+CSS layouts in place of <table> layouts if the client doesn't mind which I use?

    Read the article

  • Dom Traversal to Automate Keyboard Focus - Spatial Navigation

    - by Steve
    I'm going to start with a little background that will hopefully help my question make more sense. I am developing an application for a television. The concept is simple and basically works by overlaying a browser over the video plane of the TV. Being a TV, there is no mouse or additional pointing device; all interaction is done through a remote control. Therefore, the user needs to be able to tell visually which element they are currently focused upon. To indicate that an element is focused, I currently append a colored transparent image over the element. Now, when the user hits the arrow keys, I need to respond by focusing on the correct element for the key pressed. So, if the down arrow is pressed I need to focus on the next focusable element in the DOM tree (which may be a child or sibling), and if they hit the up arrow, I need to move to the previous one. This would essentially simulate spatial navigation within the browser. I am currently setting an attribute (focusable=true) on any DOM elements that should be able to receive focus. What I would like to do is determine the previous or next focusable element (i.e. attribute focusable=true) and apply focus to it. I was hoping to traverse the DOM tree to determine the next and previous focusable elements, but I am not sure how to do this in jQuery, or in general. I was leaning towards the jQuery tree traversal methods like next(), prev(), etc. What approach would you take to solve this type of issue? Thanks
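
    A hedged sketch of one way to do the traversal part with jQuery (the focusable=true attribute is from the question; the class name is a placeholder, and "next/previous" here means document order rather than true 2-D spatial navigation):

        // Sketch: move focus linearly through all elements flagged focusable=true.
        var $focusables = $('[focusable=true]');   // document-order list of candidates
        var current = 0;

        function moveFocus(delta) {
            $focusables.eq(current).removeClass('tv-focused');
            current = (current + delta + $focusables.length) % $focusables.length;
            $focusables.eq(current).addClass('tv-focused');   // apply the overlay/highlight style
        }

        $(document).keydown(function (e) {
            if (e.which === 40) { moveFocus(1);  }   // down arrow -> next focusable element
            if (e.which === 38) { moveFocus(-1); }   // up arrow   -> previous focusable element
        });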

    Read the article

  • Why doesn't this short php script send email?

    - by RoryG
    I can't seem to get my PHP script to send email:

        <?php
        echo "Does this page work?";
        mail('my email address', 'test subject', 'test message');
        ?>

    First, I set the mail function settings in the php.ini file. I checked my email account settings in Outlook: it does not require authentication, its port is 25, and its type of encrypted connection is 'Auto'. Given this, I configured my php.ini accordingly:

        SMTP = ssl://smtp1.iis.com
        smtp_port = 25

    Then I set sendmail_from to my email address. The echo statement prints out in the browser, so I know the PHP file is recognized and processed. But the browser also shows the following error:

        Warning: mail() [function.mail]: "sendmail_from" not set in php.ini or custom "From:" header missing in C:\xampp\htdocs\mailtest.php on line 3

    I have clearly set sendmail_from, so I don't know what else to do. I have also tried removing the 'ssl://' part from the SMTP setting in the php.ini file, and configuring the php5.ini file. Which of these .ini files should I be configuring anyway?
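
    For what it's worth, a hedged sketch of the usual way to satisfy that warning is to supply the sender explicitly, either at runtime or as a From: header on the call itself (addresses below are placeholders):

        <?php
        // Option 1: set the sender for this request only (placeholder address).
        ini_set('sendmail_from', 'me@example.com');

        // Option 2: pass an explicit From: header as mail()'s 4th argument,
        // which covers the "custom From: header missing" part of the warning.
        $headers = "From: me@example.com\r\n";
        mail('recipient@example.com', 'test subject', 'test message', $headers);
        ?>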

    Read the article

  • Asp.Net Cookie sharing

    - by SH
    This is C#.NET code: how do I share a cookie between 2 HttpWebRequest calls? Details: I am posting a form in the first request; this form contains some setting variables which are used by the system. Let's say there is an input field in the form which sets the size of the grid pages to be displayed on other pages. Once I have updated the settings with that first request, I send a request to another page which shows an ASP.NET GridView. The grid might contain several pages, and the page size should be the one I set in the previous request. But when I do this via HttpWebRequest it does not happen. When I do it via the browser (loading the settings page and then going to the grid view page) I see the page size is updated. I want to achieve this via code, since I am scraping this grid and have to set the page size or visit the grid pages one by one programmatically. Or is it possible to send with the 2nd request the cookie that was set by the first request? It would be great if I could go this way. Any solution?
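
    A hedged sketch of the standard approach: reuse one CookieContainer across both HttpWebRequest calls, so the ASP.NET session cookie set by the first response travels with the second request (URLs are placeholders, and writing the actual form fields is left as a comment):

        using System.Net;

        static void ScrapeGridWithSettings()
        {
            // One container accumulates every cookie the server sets (session cookie included).
            var cookies = new CookieContainer();

            // 1) Post the settings form (form fields would be written to the request stream here).
            var settingsRequest = (HttpWebRequest)WebRequest.Create("http://example.com/settings.aspx");
            settingsRequest.Method = "POST";
            settingsRequest.CookieContainer = cookies;   // cookies from the response land in the container
            using (settingsRequest.GetResponse()) { }

            // 2) Request the grid with the same container, so the session/settings
            //    cookies from step 1 are sent back automatically.
            var gridRequest = (HttpWebRequest)WebRequest.Create("http://example.com/grid.aspx");
            gridRequest.CookieContainer = cookies;
            using (var gridResponse = gridRequest.GetResponse())
            {
                // read and scrape the grid HTML here
            }
        }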

    Read the article

  • Android EVO4G SenseUI Flash Lite 4 cookie problem

    - by cmurray
    Got the EVO 4G today; watching it run ActionScript 3 out of the box was EXTREMELY cool. I ran into a problem though. When I connect to a server which creates an HTTP session and hands a cookie to my application, subsequent calls from my client to the server do not have the cookie attached to the HTTP request. That causes the server to invalidate the session, and my user is logged out. This appears to be a bug between the Flash Lite 4 player and the Sense UI browser running in Android 2.1 on the EVO 4G. The same application works on other platforms, including the HTC Hero when compiled for Flash Lite 2. If I hardcode my HTTP requests in the browser address bar, the cookies work, so I know cookies are working on the phone. But when my application is running in the Flash player, the cookies are not sent. I realize this may not be the best forum for this question, so if you can't answer or help me, I would appreciate pointers to more appropriate forums to ask on. Thanks!

    Read the article

  • Client side page call/scrape?

    - by Silvre
    Here is the problem: I have a web application - a frequently changing notification system - that runs on a series of local computers. The application refreshes every couple of seconds to display new information. The computers only display info and do not have keyboards or ANY input device. The issue is that if the connection to the server is lost (say updates are installed and the server must be rebooted), a page-not-found error is displayed. We must then either reboot all computers that are running this app, OR add a keyboard and refresh the browser, OR try to access each computer remotely and refresh the browser. None of these are good options, and they result in a lot of frustration. I cannot change the actual application OR the server environment. So what I need is some way to test the call to the application and, if an error is returned or it times out, keep trying every minute or so until the connection is re-established. My idea is to create a client-side page scraper that makes a JS request to the application (which serves basic HTML) and can run locally on the machine, no server required. If the scrape returns the correct content, it displays it. If not, it continues to request the page until the actual page content is returned. Is this possible? What is the best way to do it?
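
    A hedged sketch of the retry idea with a plain XMLHttpRequest (the URL and intervals are placeholders, and this assumes the local page is served from the same origin as the application; otherwise a plain iframe reload on a timer would be the fallback):

        // Sketch: keep requesting the application page; only replace the displayed
        // content when a good response comes back, otherwise retry every minute.
        var APP_URL = 'http://appserver/notifications';   // placeholder URL (same-origin)

        function tryLoad() {
            var xhr = new XMLHttpRequest();
            xhr.open('GET', APP_URL, true);
            xhr.onreadystatechange = function () {
                if (xhr.readyState !== 4) return;
                if (xhr.status === 200 && xhr.responseText) {
                    document.body.innerHTML = xhr.responseText;   // show the real page
                    setTimeout(tryLoad, 5000);                    // normal refresh cycle
                } else {
                    setTimeout(tryLoad, 60000);                   // server unreachable: retry in a minute
                }
            };
            xhr.send();
        }

        tryLoad();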

    Read the article

  • RIA Service/oData ... "Requests that attempt to access a single element using key values from a resu

    - by user327911
    I've recently started working up a sample project to play with an OData feed coming from a RIA service. I am able to view the feed and the metadata in any web browser; however, if I try to perform certain query operations on the feed I receive "unsupported" exceptions. Sample OData feed:

        ProductSet
        http://localhost:50880/Services/Rebirth-Web-Services-ProductService.svc/OData/ProductSet/  2010-04-28T14:02:10Z
        http://localhost:50880/Services/Rebirth-Web-Services-ProductService.svc/OData/ProductSet(guid'b0a2b170-c6df-441f-ae2a-74dd19901128')  2010-04-28T14:02:10Z
        b0a2b170-c6df-441f-ae2a-74dd19901128  Product 0  Type 1  Active

    Sample web.config entry:

    Sample service:

        [EnableClientAccess()]
        public class ProductService : DomainService
        {
            [Query(IsDefault = true)]
            public IQueryable<Product> GetProducts()
            {
                IList<Product> products = new List<Product>();
                for (int i = 0; i < 90; i++)
                {
                    Product product = new Product
                    {
                        Id = Guid.NewGuid(),
                        Name = "Product " + i.ToString(),
                        ProductType = i < 30 ? "Type 1" : ((i > 30 && i < 60) ? "Type 2" : "Type 3"),
                        Status = i % 2 == 0 ? "Active" : "NotActive"
                    };
                    products.Add(product);
                }
                return products.AsQueryable();
            }
        }

    If I provide the url "http://localhost:50880/Services/Rebirth-Web-Services-ProductService.svc/OData/ProductSet(guid'b0a2b170-c6df-441f-ae2a-74dd19901128')" to my web browser I receive the following response: "Requests that attempt to access a single element using key values from a result set are not supported." I'm new to RIA and OData. Could this be something as simple as my web browser not supporting this type of query on the result set, or something else? Thanks ahead! Corey

    Read the article

  • `this` in global scope in ECMAScript 6

    - by Nathan Wall
    I've tried looking in the ES6 draft myself, but I'm not sure where to look: can someone tell me if this in ES6 necessarily refers to the global object? Will this object have the same members as the global scope? If you could answer for ES5 that would be helpful as well. I know this in global scope refers to the global object in the browser and in most other ES environments, like Node. I just want to know if that's the behavior defined by the spec or extended behavior that implementers have added (and whether this behavior will continue in ES6 implementations). In addition, is the global object always the same thing as the global scope, or are there distinctions? Update - why I want to know: I am basically trying to figure out how to get the global object reliably in ES5 & 6. I can't rely on window because that's specific to the browser, nor can I rely on global because that's specific to environments like Node. I know this in Node can refer to module in module scope, but I think it still refers to global in global scope. I want a cross-environment ES5 & 6 compliant way to get the global object (if possible). It seems like in all the environments I know of, this in global scope does that, but I want to know if it's part of the actual spec (and so reliable across any environment that I may not be familiar with). I also need to know if the global scope and the global object are the same thing per the spec. In other words, will all variables in global scope be available as globalobject.variable_name?
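
    A hedged sketch of the environment-agnostic fallback chain that is commonly used in practice (this reflects convention, not the spec guarantee the asker is trying to pin down; the variable name is a placeholder):

        // Sketch: resolve the global object without assuming window (browser)
        // or global (Node). A function created with the Function constructor is
        // non-strict, so `this` inside it is the global object even inside
        // strict-mode or module code.
        var globalObject =
            (typeof window !== 'undefined') ? window :
            (typeof global !== 'undefined') ? global :
            Function('return this')();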

    Read the article

  • Asp.Net 2 integrated sites: how to log out the second site programmatically.

    - by NBrowne
    Hi, I am working with an ASP.NET 2.0 site (call it site 1) which has an iframe in it that loads up another site (site 2), also an ASP.NET site developed by our team. When you log onto site 1, behind the scenes site 2 is also logged in, so that when you click the iframe tab it displays site 2 with the user logged in (to prevent the user from having to log in twice). The problem I have is that when a user logs out of site 1 we call some cleanup methods to perform FormsAuthentication.SignOut and clear session variables etc., but at the moment no cleanup is done for the user on site 2. So if the user then opens site 2 directly in a browser, site 2 opens with the user still logged in, which is undesired. Can anyone give me some guidance as to the best approach for this? One possible approach I thought of was that on click of the logout button I could make a call to a custom page on site 2 which would do the logout. Code below:

        HttpWebRequest request;
        request = ((HttpWebRequest)(WebRequest.Create("www.mywebsite.com/Site2Logout.aspx")));
        request.Method = "POST";
        HttpCookie cookie = HttpContext.Current.Request.Cookies[FormsAuthentication.FormsCookieName];
        Cookie authenticationCookie = new Cookie(
            FormsAuthentication.FormsCookieName,
            cookie.Value,
            cookie.Path,
            HttpContext.Current.Request.Url.Authority);
        request.CookieContainer = new CookieContainer();
        request.CookieContainer.Add(authenticationCookie);
        request.GetResponse();

    The problem I am having with this code is that when I run it and debug on site 2 and check whether the user is authenticated, they are not, which I don't understand, because if I open a browser and browse to site 2 I am still authenticated. Any ideas, different directions to take, etc.? Please let me know if you need any more info or if something I have said doesn't make sense. Thanks

    Read the article

  • Python - Problems using mechanize to log into a difficult website

    - by user1781599
    I am trying to log in to betfair.com using mechanize. I have tried several ways but it always fails. This is the code I have developed so far; can anyone help me identify what is wrong with it and how I can improve it to log into my Betfair account? Thanks,

        import cookielib
        import urllib
        import urllib2
        from BeautifulSoup import BeautifulSoup
        import mechanize
        from mechanize import Browser
        import re

        bf_username_name = "username"
        bf_password_name = "password"
        bf_form_name = "loginForm"
        bf_username = "xxxxx"
        bf_password = "yyyyy"
        urlLogIn = "http://www.betfair.com/"
        accountUrl = "https://myaccount.betfair.com/account/home?rlhm=0&"  # This url I will use to verify if log in has been successful

        br = mechanize.Browser(factory=mechanize.RobustFactory())
        br.addheaders = [("User-Agent", "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_5_8) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.90 Safari/537.1")]
        br.open(urlLogIn)
        br.select_form(nr=0)
        print br.form
        br.form[bf_username_name] = bf_username
        br.form[bf_password_name] = bf_password
        print br.form  # just to check username and psw have been recorded correctly
        responseSubmit = br.submit()
        response = br.open(accountUrl)

        text_file = open("LogInResponse.html", "w")
        text_file.write(responseSubmit.read())  # this file should show the home page with me logged in, but it shows the home page as if I was not logged in
        text_file.close()

        text_file = open("Account.html", "w")
        text_file.write(response.read())  # this file should show my account page, but it shows a pop up with an error
        text_file.close()

    Read the article

  • Very urgent: how to start a new session when a user opens a new tab in IE or Mozilla on WebSphere

    - by ha22109
    Hi, I have a "user search" portlet on the home page of an application running on WebSphere Portal Server. It displays the matching user records for the search criteria filled in on the search form. I have a requirement for a "back to search input" link on the results page which, on click, should show the filled-in form on the input JSP. The issue I am facing: if I open the application in two different tabs of the same IE browser, enter some search criteria in one tab and submit, then search for some other input from the other tab (in the same browser), and then go back to the previous tab and click the "back to search input" link, instead of showing me the first input it shows me the input I entered in the second tab. I am setting and getting the form bean through the portlet session, but in two different tabs of the same IE it will be the same user session (and probably the same portlet session). Please tell me a solution for this. One thing to note here is that I can access this "user search" application even without logging in, so it must be using the default portlet session in that case. What will happen once I log in and then search - will it overwrite the portlet session and HTTP session, or how does that work?

    Read the article

  • AJAX/JSONP question: Access is denied in IE when making a cross-domain request.

    - by Sisir
    OK, here we go. I have already searched Stack Overflow for the answer and found some useful info, but I want to clear up a few more things. I also searched the net, with no real help. I have worked with a couple of APIs (Yelp, outside.in). With Yelp I inject a script into the head with the URL request to the API plus a callback function, and that works fine in all browsers. But with the outside.in API, when I call the URL the callback is not invoked. Yelp has a URL field that can be used like callback=callbackfunction so the callback is called automatically, but outside.in has no such field. Is there a standard parameter for the callback function that will work regardless of server/API? I also tried a standard AJAX request using jQuery's $.ajax() function. It worked on my local PC in both IE and other browsers, but on the remote host it does not work in IE, showing the error "Access is denied"; other browsers seem OK. Firebug in my FF doesn't report any errors either. outside.in has a JavaScript example but it is too hard for me to understand: github.com/outsidein/api-examples/tree/master/javascript/browser/ . Site I am working on: http://citystir.com ; Yelp: yelp.com ; outside.in: outside.in . Technical info: I am using WampServer locally, and WordPress hosted on GoDaddy (Apache on Linux) remotely. Code: the $.ajax URL is like "http://hyperlocal-api.outside.in/v1.1/states/Illinois/cities/chicago/stories?dev_key="+key+"&sig="+signeture+"&limit=3

        function makeOutsideRequest(url){
            $.ajax({
                url: url,
                dataType: 'json',
                type: 'GET',
                success: function (data, status, xhr) {
                    if (data == null) {
                        alert("An error occurred connecting to " + url +
                              ". Please ensure that the server is running and configured to allow cross-origin requests.");
                    } else {
                        printHomeNews(data);
                    }
                },
                error: function (xhr, status, error) {
                    alert("An error occurred - check the server log for a stack trace.");
                }
            });
        }

    Thanks!
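
    Since the API offers no callback parameter, JSONP is off the table; a hedged sketch of the usual workaround is a tiny same-origin PHP proxy that the page calls instead of the remote API, so IE never makes a cross-domain request (the filename, key values and lack of error handling are placeholders):

        <?php
        // proxy.php - fetch the outside.in URL server-side and relay the JSON,
        // so the browser only ever makes a same-origin request.
        $devKey    = 'YOUR_DEV_KEY';     // placeholder: keep keys server-side
        $signature = 'YOUR_SIGNATURE';   // placeholder

        $url = 'http://hyperlocal-api.outside.in/v1.1/states/Illinois/cities/chicago/stories'
             . '?dev_key=' . $devKey . '&sig=' . $signature . '&limit=3';

        header('Content-Type: application/json');
        echo file_get_contents($url);
        ?>

    The existing makeOutsideRequest() function could then be pointed at proxy.php unchanged.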

    Read the article

  • html in do_GET() method of a simple Python webserver

    - by Meeri_Peeri
    I am relatively new to Python but have been doing a lot of different things with it recently and I am liking it a lot. However, I ran into a block with the following code:

        import http.server
        import socketserver
        import glob
        import random

        class Server(http.server.SimpleHTTPRequestHandler):
            def do_GET(self):
                self.send_response(200, 'OK')
                self.send_header('Content-type', 'html')
                self.end_headers()
                self.wfile.write(bytes("<html> <head><title> Hello World </title> </head> <body>", 'UTF-8'))
                images = glob.glob('*.jpg')
                rand = random.randint(0, len(images) - 1)
                imagestring = "<img src = \"" + images[rand] + "\" height = 1028 width = 786 align = \"right\"/> </body> </html>"
                self.wfile.write(bytes(imagestring, 'UTF-8'))

            def serve_forever(port):
                socketserver.TCPServer(('', port), Server).serve_forever()

        if __name__ == "__main__":
            Server.serve_forever(8000)

    What I am trying to do here is grab a random image from the images in the directory and add it to the response to a web request. The code works fine, but when I access the server via a browser, the image is not displayed; the HTML of the page is as intended, though. The permissions on the files are 755. I also tried creating an index.html file in the do_GET method. That didn't work either - I mean, the index.html was generated fine, but this time the response in the browser did not show anything (not even the hello world in the title). Am I missing anything very simple here? I was also wondering whether I should override handle_request of the underlying SocketServer.BaseServer, since the documentation says you should never override BaseHTTPServer's handle() method and should instead override the corresponding do_* method.
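
    For reference, a hedged sketch of the piece that is usually missing in this setup: the <img> tag makes a second GET for the .jpg itself, and the handler above answers every GET with HTML, so the image request never receives image bytes. One way to branch on the path inside do_GET:

        def do_GET(self):
            # Sketch: serve raw image bytes for *.jpg requests, HTML for everything else.
            if self.path.endswith('.jpg'):
                with open(self.path.lstrip('/'), 'rb') as f:
                    data = f.read()
                self.send_response(200)
                self.send_header('Content-type', 'image/jpeg')
                self.end_headers()
                self.wfile.write(data)
                return
            self.send_response(200)
            self.send_header('Content-type', 'text/html')   # 'html' alone is not a valid MIME type
            self.end_headers()
            # ... build and write the HTML page with the random <img> tag as before ...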

    Read the article

  • Hide elements for current visit only

    - by deanloh
    I need to display a banner that sticks to the bottom of the browser, so I used this code:

        $(document).ready(function(){
            $('#footie').css('display','block');
            $('#footie').hide().slideDown('slow');
            $('#footie_close').click(function(){
                $('#footie_close').hide();
                $('#footie').slideUp('slow');
            });
        });

    And here's the HTML:

        <div id="footie"> {banner here} <a id="footie_close">Close</a> </div>

    I added the close link there to give the user the option to close the banner. However, when the user navigates to the next page, the banner shows up again. What can I do to keep the banner hidden just for this visit? In other words, as long as the browser remains open, the banner should not show up again; but if the user returns to the site another time, the banner should load again. Thanks in advance for any help!
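
    A hedged sketch of one way to do this with a session cookie (a cookie written without an expiry date lasts only until the browser is closed; the cookie name is a placeholder):

        $(document).ready(function(){
            // If the banner was already closed during this browser session, keep it hidden.
            if (document.cookie.indexOf('footieClosed=1') === -1) {
                $('#footie').css('display','block').hide().slideDown('slow');
            }
            $('#footie_close').click(function(){
                $('#footie_close').hide();
                $('#footie').slideUp('slow');
                // No "expires" attribute -> session cookie, cleared when the browser closes.
                document.cookie = 'footieClosed=1; path=/';
            });
        });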

    Read the article

  • get_browser not working

    - by tazphoenix
    It's not working. I mean, I have many scripts to get the IP and OS, but get_browser is a built-in function and should work, yet it doesn't. When I print_r its return value I get:

        Array
        (
            [browser_name_regex] => §^.*$§
            [browser_name_pattern] => *
            [browser] => Default Browser
            [version] => 0
            [majorver] => 0
            [minorver] => 0
            [platform] => unknown
            [alpha] =>
            [beta] =>
            [win16] =>
            [win32] =>
            [win64] =>
            [frames] => 1
            [iframes] =>
            [tables] => 1
            [cookies] =>
            [backgroundsounds] =>
            [cdf] =>
            [vbscript] =>
            [javaapplets] =>
            [javascript] =>
            [activexcontrols] =>
            [isbanned] =>
            [ismobiledevice] =>
            [issyndicationreader] =>
            [crawler] =>
            [cssversion] => 0
            [supportscss] =>
            [aol] =>
            [aolversion] => 0
        )

    I'm using Win7 and Firefox.
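
    A "Default Browser" result like this usually means the browscap database PHP is using is missing or too old to recognize the user agent; a hedged sketch of the relevant php.ini setting, pointing at a freshly downloaded browscap.ini (the path is a placeholder, and the web server needs a restart afterwards):

        ; php.ini - tell get_browser() where to find an up-to-date browscap database
        [browscap]
        browscap = "C:\xampp\php\extras\browscap.ini"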

    Read the article

  • session timeout prompt asp.net

    - by renathy
    The application I am working on implements a session timeout prompt using jQuery. There is a timer that counts user inactivity, and if there has been no activity for a predefined X minutes it shows the user a prompt ("Your session will end soon... Continue or Logout"). It uses the approach found here: http://www.codeproject.com/Articles/227382/Alert-Session-Time-out-in-ASP-Net. However, this doesn't work if the user opens a new tab: 1) The user logs in and the timer starts counting inactivity. 2) The user clicks a link that opens in a new window (in our case, a long-running report). 3) The second tab is active, and its callbacks/postbacks keep the session alive. 4) However, the first tab is inactive, so its counter decides the session should be closed, displays the message, and then logs the user out. This is not what we want: even if the user is active in another tab, the application logs them out anyway. We have a Report page that opens the report in a new tab/window, and it can run for quite a long time. The report section takes care of its callbacks, so the session won't end from that tab; however, it would still be ended by the other, inactive tab.
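
    One hedged sketch of a way to make the tabs agree (not from the linked article): have every tab write its last-activity time to localStorage, which is shared per origin across tabs, and have the timeout check read that shared value instead of a per-tab timer. The key name, timeout and showTimeoutPrompt() are placeholders for the existing prompt logic.

        // Record activity in a store visible to every tab of this site.
        function touchActivity() {
            localStorage.setItem('lastActivity', Date.now().toString());
        }
        $(document).on('click keydown mousemove', touchActivity);
        // (the report tab would also call touchActivity() from its callback handlers)

        // The timeout check reads the shared timestamp, so activity anywhere resets it.
        var TIMEOUT_MS = 20 * 60 * 1000;   // placeholder: 20-minute session
        setInterval(function () {
            var last = parseInt(localStorage.getItem('lastActivity') || '0', 10);
            if (Date.now() - last > TIMEOUT_MS) {
                showTimeoutPrompt();   // hypothetical: the existing jQuery prompt
            }
        }, 60 * 1000);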

    Read the article

  • I have a slight confusion with setting up Mercurial on my webserver...

    - by littlejim84
    I'm starting to use Mercurial on my web server (in this case MediaTemple's Grid). I've used SVN previously, though I'm not an expert in version control systems. I just need a little help clearing up some confusion about getting it set up optimally. I have a 'data' folder which is outside the web server root and which the browser cannot access. It was recommended to me before to have my Mercurial repositories set up here, and then to clone from here to my local computer. I also have a 'domains' folder that is basically the web server root; inside it are my actual domains where my websites are actually served to the browser - these would need to be updated from the 'data' repositories too. But with this in mind, after setting it up, it seems inefficient... I'm cloning to my local machine (that makes sense), adding, committing, pushing. That's fine... But then I'm updating the 'data' repository and then updating the 'domains' folder to actually update my websites. Surely I don't actually need this 'data' folder for repositories? Wouldn't my actual live 'domains' folders be the main repositories themselves, so that I clone locally from and update those? Please help me clear up some of this confusion (if you can).
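
    For reference, a hedged sketch of the simpler layout the question is leaning toward: treat each live folder under 'domains' as a repository and push to it directly, with a changegroup hook refreshing the working copy on every push. The host, paths and repository name below are placeholders.

        # One-time setup on the server: make the live folder itself a repository.
        hg init /home/blah/domains/mysite.com

        # In /home/blah/domains/mysite.com/.hg/hgrc, update the working copy on every push:
        #   [hooks]
        #   changegroup = hg update
        # (and make sure the web server is configured to deny access to the .hg directory)

        # Then, from the local clone, publishing is just a push:
        hg push ssh://user@myhost//home/blah/domains/mysite.com

    The separate 'data' folder is only needed if you want a repository that is never served directly; with the hook above, the live folder doubles as the central repository and the extra update step disappears.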

    Read the article

  • getting windows username with javascript

    - by jbkkd
    I have a site built in ASP.NET and C#; let's call it webapp. It uses a Forms system to log into it, and that cannot be changed easily. I got a request to change the login to some kind of Windows authentication. I'll explain: our Windows login uses Active Directory, and the login names look like sXXXXXXX, where the X's are numbers. In my webapp, I want to take the numbers from the user's Active Directory login and check whether they exist in the webapp database. If they exist, the user is logged in automatically; if not, they are referred to the regular login page that is currently in use. I tried changing IIS to disable anonymous access and enable Windows authentication, so that the user's browser sends its current logged-in user name to my webapp. I also changed the web.config from "Forms" to "Windows", which broke my whole webapp, since the Forms system no longer worked. My question is this: is there a different way for the browser alone to send the username to my webapp? I thought maybe JavaScript; I just don't know how to implement that, or if it's even possible. I know it's not very secure, but this whole platform is not on the internet; it's on a private network.

    Read the article

  • PHP session_write_close() keeps sending a set-cookie header

    - by Chiraag Mundhe
    In my framework, I make a number of calls to session_write_close(). Let's assume that a session has already been initiated with a user agent. The following code...

        for ($i = 0; $i < 3; $i++) {
            session_start();
            session_write_close();
        }

    ...will send the following response header to the browser:

        Set-Cookie: PHPSESSID=bv4d0n31vj2otb8mjtr59ln322; path=/
                    PHPSESSID=bv4d0n31vj2otb8mjtr59ln322; path=/

    There should be no Set-Cookie header because, as I stipulated, the session cookie has already been created on the user's end. But every call to session_write_close() after the first one in the script above results in PHP instructing the browser to set the current session cookie again. This is not breaking my app or anything, but it is annoying. Does anyone have any insight into preventing PHP from re-setting the cookie with each subsequent call to session_write_close()? EDIT: The problem seems to be that with every subsequent call to session_start(), PHP re-sets the session cookie to its own SID and sends a Set-Cookie response header. But why?

    Read the article

  • Page Entirely Blank Despite Having Source Code! (TinyMCE, Firefox)

    - by Chris Cooper
    Alright guys, here's a tough one, in reference to this page: the page will seemingly randomly not display the server's output when using Firefox (version 3.5). I have not seen this problem occur in Safari or IE. The best way to reproduce it is to reload the page about 10 times; it ought to have happened by then, and once it does, it continues on subsequent refreshes until you change the page. The problem is literally the browser not displaying the output. Viewing the source shows all the appropriate code, yet the browser displays a blank white page. The Web Developer and Firebug plugins don't show any errors that would indicate the problem. I have tested this on a separate system and OS, and it occurs in Firefox there as well. The problem did not occur until TinyMCE (a rich text editor JavaScript library for textareas) was added to the project, though TinyMCE itself works where it should. I know this is a confusing problem, but I am completely lost as to what could be causing it. Thanks in advance. EDIT: If it's any help, I've noticed that if I cause a CSS file error by changing a stylesheet source to something non-existent (xxx.css), the page displays without a problem every time (aside from the related CSS not being applied). Likewise, causing any simple JavaScript error with some bad code makes the page load properly every time (aside, of course, from JavaScript not running on the page). EDIT #2: Moving all <script> tags down to the tail of the <body> 'fixes' (well, hides) the error and the page shows normally. A band-aid.

    Read the article
