Search Results

Search found 22893 results on 916 pages for 'client scripting'.

  • Syncing Magento database from development to production

    - by ringerce
    I use git for version control. I have a development, staging and production environment. When I finish in development I push to staging for review by the client. When approved, I push changes from staging to production. That works fine as long as there are no database changes. What happens if I install modules via Magento Connect on local development and they make database modifications? How would I push those changes up to the production server, since the production server is always changing?

    Edit: I wrote two shell scripts. One pulls the production database down to my development server, replaces the base URL with the development URL and updates my development DB accordingly. It also leaves the production SQL dump behind to be added to my git repo. I'm not really sure it's beneficial to keep the raw dumps in source control, but I'm going to try it out. The second script moves the development database up to staging and essentially performs the same operations as the first. Now when it comes time to move to production I pull the updated production repo into the production server and allow Magento to do its thing.

    I also started using SQLyog recently; it has a database comparison wizard which will give me the differences between my development and production databases and allow me to merge the changes in selectively. It always creates a migration script, which I add to source control as well. If anything goes wrong I can run the comparison to see if anything was missed. Does this sound like a decent workflow to you guys?
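    For illustration only, a rough sketch of what the first script described above might look like; the hostnames, credentials and the base-URL update are assumptions, not the poster's actual script (Magento 1 keeps the base URL in core_config_data):

        #!/bin/sh
        # Hypothetical pull-production-db helper; all connection details are made up.
        set -e

        DUMP_FILE="production-$(date +%F).sql"

        # 1. Dump the production database and keep the raw dump for the git repo
        ssh deploy@prod.example.com "mysqldump -u magento -pSECRET magento_prod" > "$DUMP_FILE"

        # 2. Load it into the development database
        mysql -u magento -pSECRET magento_dev < "$DUMP_FILE"

        # 3. Point the development copy at the development URL
        mysql -u magento -pSECRET magento_dev -e "
            UPDATE core_config_data
            SET value = 'http://dev.example.local/'
            WHERE path IN ('web/unsecure/base_url', 'web/secure/base_url');"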

  • JqueryUI dialog box causes button to lose styling

    - by superexsl
    Hey everyone, I'm using jQuery UI dialog boxes and, while it works, it seems to remove any CSS styling on the button that opens the dialog. I have to manually refresh the page to get the styling back. An example button which launches the dialog (button_submit is my CSS theme and launch_popup is used to detect the button click in jQuery):

        <asp:Button ID="btnLaunch" CssClass="button_sub launch_popup" runat="server"
            Text="Dialog..." CausesValidation="false" OnClientClick="return false;" />

    My jQuery:

        $('.launch_popup').click(function() {
            $("#dialog-form").dialog("open");
        });

    My dialog-form:

        $("#dialog-form").dialog({
            autoOpen: false,
            height: 300,
            width: 300,
            modal: true,
            buttons: {
                'Close': function() {
                    $(this).dialog('close');
                }
            }
        });

    This happens to all the buttons that open dialogs. Is there a way to 'restyle' the button without refreshing the page? (It's in an UpdatePanel if that makes a difference, although this bit is client-side so I'm not sure if that should affect it.) Thanks for any help.

  • confused about how to use JSON in C#

    - by Josh
    The answer to just about every single question about using C# with JSON seems to be "use JSON.NET", but that's not the answer I'm looking for. The reason I say that is, from everything I've been able to read in the documentation, JSON.NET is basically just a better-performing version of the DataContractSerializer built into the .NET Framework, which means that if I want to deserialize a JSON string, I have to define the full, strongly-typed class for EVERY request I might have. So if I need to get categories, posts, authors, tags, etc., I have to define a new class for every one of these things. This is fine if I built the client and know exactly what the fields are, but I'm using someone else's API, so I have no idea what the contract is unless I download a sample response string and create the class manually from the JSON string.

    Is that the only way it's done? Is there not a way to have it create a kind of hashtable that can be read with json["propertyname"]?

    Finally, if I do have to build the classes myself, what happens when the API changes and they don't tell me (as Twitter seems to be notorious for doing)? I'm guessing my entire project will break until I go in and update the object properties.

    So what exactly is the general workflow when working with JSON? And by general I mean library-agnostic: I want to know how it's done in general, not specifically with a target library. I hope that made sense; this has been a very confusing area to get into. Thanks!
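    For what it's worth, the hashtable-style access described above is possible without writing a class per response; a minimal sketch using the framework's built-in JavaScriptSerializer (the JSON payload here is invented for illustration):

        // Needs a reference to System.Web.Extensions
        using System;
        using System.Collections.Generic;
        using System.Web.Script.Serialization;

        class JsonDictionaryDemo
        {
            static void Main()
            {
                string json = "{\"name\":\"example\",\"count\":3}";   // hypothetical response body

                // Deserialize into a dictionary instead of a hand-written contract class
                var data = new JavaScriptSerializer()
                    .Deserialize<Dictionary<string, object>>(json);

                Console.WriteLine(data["name"]);    // values are read by property name
                Console.WriteLine(data["count"]);
            }
        }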

  • How to model a mutually exclusive relationship in SQL Server

    - by littlechris
    Hi, I have to add functionality to an existing application and I've run into a data situation that I'm not sure how to model. I am restricted to the creation of new tables and code. If I need to alter the existing structure I think my client may reject the proposal, although if it's the only way to get it right this is what I will have to do.

    I have an Item table that can be linked to any number of tables, and these tables may increase over time. The Item can only be linked to one other table, but the record in the other table may have many items linked to it. Examples of the tables/entities being linked to are "Person", "Vehicle", "Building", "Office". These are all separate tables. Examples of Items are "Pen", "Stapler", "Cushion", "Tyre", "A4 Paper", "Plastic Bag", "Poster", "Decoration". For instance a "Poster" may be allocated to a "Person" or "Office" or "Building". In the future if they add a "Conference Room" table it may also be added to that.

    My initial thoughts are:

        Item { ID, Name }
        LinkedItem { ItemID, LinkedToTableName, LinkedToID }

    The LinkedToTableName field will then allow me to identify the correct table to link to in my code. I'm not overly happy with this solution, but I can't quite think of anything else. Please help! :) Thanks!
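    A rough T-SQL sketch of the structure proposed above, purely for illustration; the column types, sizes and keys are guesses rather than part of the question:

        -- Hypothetical DDL for the Item / LinkedItem idea described above.
        CREATE TABLE Item (
            ID   INT IDENTITY(1,1) PRIMARY KEY,
            Name NVARCHAR(100) NOT NULL
        );

        CREATE TABLE LinkedItem (
            ItemID            INT NOT NULL REFERENCES Item (ID),
            LinkedToTableName SYSNAME NOT NULL,  -- e.g. 'Person', 'Office', 'Building'
            LinkedToID        INT NOT NULL,      -- key in the target table; no real FK is possible here
            PRIMARY KEY (ItemID)                 -- each item links to at most one record
        );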

  • FreeTDS runs out of memory from DBD::Sybase

    - by skiphoppy
    When I add client charset = UTF-8 to my freetds.conf file, my DBD::Sybase program emits: Out of memory! and terminates. This happens when I call execute() on an SQL query statement that returns any ntext fields. I can return numeric data, datetimes, and nvarchars just fine, but whenever one of the output fields is ntext, I get this error. All these queries work perfectly fine without the UTF-8 setting, but I do need to handle some characters that throw warnings under the default character set. (See related question.) The error message is not formatted the same way other DBD::Sybase error messages seem to be formatted. I do get a message that a rollback() is being issued, though. (My false AutoCommit flag is being honored.) I think I read somewhere that FreeTDS uses the iconv program to convert between character sets; is it possible that this message is being emitted from iconv? If I execute the same query with the same freetds.conf settings in tsql (FreeTDS's command-line SQL shell), I don't get the error. I'm connecting to SQL Server. What do I need to do to get these queries to return successfully?

  • MS Word opens documents hosted on WebDav share read-only on Windows Vista and 7 but only if no other

    - by rjmunro
    We have a WebDAV server with some Word documents on it. (We are using PHP's HTTP_WebDAV_Server but get the same issue in tests with Apache mod_dav; both use digest authentication, and basic auth doesn't work on Vista or later.) We have a web page that opens the Word documents using JavaScript like:

        Doc = new ActiveXObject("Sharepoint.OpenDocuments.3");
        Doc.EditDocument(url, 'Word.Document');

    which causes Word to connect to the WebDAV server and open the document, bypassing IE and most of Windows' built-in WebDAV client. On Windows XP this works perfectly, and (after prompting you to log in) allows you to edit the Word document and save it back to the server. On Windows 7 and Windows Vista it usually opens the document read-only, but not in all cases. After quite a bit of trial and error, we found that it works (i.e. opens read/write) if Explorer happens to be already connected to a WebDAV server. Note that this works with any WebDAV server, not necessarily the one with the document that you are trying to edit.

    So other than telling our users to change settings on their machines, is there anything we can do in the JavaScript SharePoint call, or on the WebDAV server, that will fix this issue?

    P.S. We have the same problem when launching Word from an HTA file version of our system, with JavaScript like:

        wordApp = new ActiveXObject("Word.application");
        wordApp.Visible = true;
        doc = wordApp.Documents.Open(url);

    P.P.S. Sorry if you think this question should be on Serverfault (or even SuperUser). I couldn't decide, but because we are programming the WebDAV server ourselves (in PHP) and I have more rep on this site than the others, I decided to post it here :-)

  • Is there a better tool than postcat for viewing postfix mail queue files?

    - by Geekman
    So I got a call early this morning about a client needing to see what email they have waiting to be delivered sitting in our secondary mail server. Their link to the main server had been (and still is) down for two days and they needed to see their email. So I wrote up a quick Perl script to use mailq in combination with postcat to dump each email for their address into separate files, tar'd it up and sent it off. Horrible code, I know, but it was urgent. My solution works OK in that it at least gives a raw view, but I thought tonight it would be nice if I had a solution where I could provide their email attachments and maybe remove some "garbage" header text as well. Most of the important emails seem to have a PDF or similar attached. I've been looking around but the only method of viewing queue files I can see is the postcat command, and I really don't want to write my own parser - so I was wondering if any of you have already done so, or know of a better command to use? Here's the code for my current solution:

        #!/usr/bin/perl
        $qCmd="mailq | grep -B 2 \"someemailaddress@isp\" | cut -d \" \" -f 1";
        @data = split(/\n/, `$qCmd`);
        $i = 0;
        foreach $line (@data) {
            $i++;
            $remainder = $i % 2;
            if ($remainder == 0) {
                next;
            }
            if ($line =~ /\(/ || $line =~ /\n/ || $line eq "") {
                next;
            }
            print "Processing: " . $line . "\n";
            `postcat -q $line > $line.email.txt`;
            $subject=`cat $line.email.txt | grep "Subject:"`;
            #print "SUB" . $subject;
            #`cat $line.email.txt > \"$subject.$line.email.txt\"`;
        }

    Any advice appreciated.

  • How to deal with clients and iterations in Agile team?

    - by Ondrej Slinták
    This thread is a follow-up to my previous one. It's in fact two questions, so I hope no one minds, as they are dependent on each other. We are starting a new project at work and we consider it a great opportunity to try Agile techniques in action. We had a brainstorming session about ideas we read in several books and articles, and came up with a concept that would suit us best: two-week iterations, each followed by a call with the clients, who would choose what they want to have in the next iteration. I just have a few more questions which we couldn't figure out ourselves.

    What to do in the first iteration? What, generally, should we do in the first few iterations if we start from scratch? Just give it a month of development to code the core of the application, or start with simple wire-frames with limited pre-coded functionality? What do clients usually want to see: shiny stuff that doesn't work, or ugly stuff that does work?

    How to communicate with clients? Our initial thought is to set the process to something like this: is it a good idea to have a Focal Point on the client side, or is it better to communicate directly with all the clients to prevent miscommunication?

    Any thoughts are welcome! Thanks in advance.

  • GZipStream in a WebHttpResponse producing no data

    - by Pierre 303
    I want to compress my HTTP responses for clients that support it. Here is the code used to send a standard response:

        IHttpClientContext context = (IHttpClientContext)sender;
        IHttpRequest request = e.Request;
        string responseBody = "This is some random text";

        IHttpResponse response = request.CreateResponse(context);
        using (StreamWriter writer = new StreamWriter(response.Body))
        {
            writer.WriteLine(responseBody);
            writer.Flush();
            response.Send();
        }

    The code above works fine. Now I added gzip support below. When I test it with a browser that supports gzip, or with a custom method, it returns an empty string. I'm sure I'm missing something simple, but I just can't find it...

        IHttpClientContext context = (IHttpClientContext)sender;
        IHttpRequest request = e.Request;
        string acceptEncoding = request.Headers["Accept-Encoding"];
        string responseBody = "This is some random text";

        IHttpResponse response = request.CreateResponse(context);
        if (acceptEncoding != null && acceptEncoding.Contains("gzip"))
        {
            byte[] bytes = ASCIIEncoding.ASCII.GetBytes(responseBody);
            response.AddHeader("Content-Encoding", "gzip");
            using (GZipStream writer = new GZipStream(response.Body, CompressionMode.Compress))
            {
                writer.Write(bytes, 0, bytes.Length);
                writer.Flush();
                response.Send();
            }
        }
        else
        {
            using (StreamWriter writer = new StreamWriter(response.Body))
            {
                writer.WriteLine(responseBody);
                writer.Flush();
                response.Send();
            }
        }

  • How do I add a new object with suds?

    - by Jerome
    I'm trying to use suds but have so far been unsuccessful at figuring this out. Hopefully it's something simple that I'm missing. Any help would be highly appreciated. This is supposed to be the raw SOAP message that I need to achieve:

        <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                          xmlns:api="http://api.service.apimember.soapservice.com/">
          <soapenv:Header/>
          <soapenv:Body>
            <api:insertOrUpdateMemberByObj>
              <token>t67GFCygjhkjyUy8y9hkjhlkjhuii</token>
              <member>
                <dynContent>
                  <entry>
                    <key>FIRSTNAME</key>
                    <value>hhhhbbbbb</value>
                  </entry>
                </dynContent>
                <email>[email protected]</email>
              </member>
            </api:insertOrUpdateMemberByObj>
          </soapenv:Body>
        </soapenv:Envelope>

    So I use suds to create the member object:

        member = client.factory.create('member')

    which produces:

        (apiMember){
            attributes = (attributes){
                entry[] = <empty>
            }
        }

    How exactly do I append an 'entry'? I try this:

        member.attributes.entry.append({'key':'FIRSTNAME','value':'test'})

    and that produces this:

        (apiMember){
            attributes = (attributes){
                entry[] = {
                    value = "test"
                    key = "FIRSTNAME"
                },
            }
        }

    However, what I actually need is:

        (apiMember){
            attributes = (attributes){
                entry[] = (entry){
                    value = "test"
                    key = "FIRSTNAME"
                },
            }
        }

    How do I achieve this?

  • How to share the credentials between Webform and Winform?

    - by Daniel
    Hi! Question: I want to put a login form on the ClickOnce deployment webpage and only allow authenticated users to download the application, and I want the downloaded application to use the same credentials entered on the webpage, without prompting the users to enter their credentials again.

    Details: I have an application (a Windows client) which needs customized settings for different users. The application is deployed through ClickOnce. Currently, the users are given the ClickOnce webpage URL and download the application from there. After downloading and running the application, it prompts users with a login form. If their credentials are authenticated, the application loads the customized settings from the server's database according to the credentials given.

    The problem is that any unauthenticated user can download the application if they just know the ClickOnce deployment webpage's URL. Unauthenticated users won't be able to run the application anyway, because the application asks for credentials when started, but I want to prevent unauthenticated users from downloading the application at all. Am I asking the wrong question, maybe? Your help is much appreciated!

  • C++ Program Flow: Sockets in an Object and the Main Function

    - by jfm429
    I have a rather tricky problem regarding C++ program flow using sockets. Basically what I have is this: a simple command-line socket server program that listens on a socket and accepts one connection at a time. When that connection is lost it opens up for further connections. That socket communication system is contained in a class. The class is fully capable of receiving the connections and mirroring the data received to the client. However, the class uses UNIX sockets, which are not object-oriented. My problem is that in my main() function, I have one line - the one that creates an instance of that object. The object then initializes and waits. But as soon as a connection is gained, the object's initialization function returns, and when that happens, the program quits. How do I somehow wait until this object is deleted before the program quits?

    Summary:

        main() creates instance of object
        Object listens
        Connection received
        Object's initialization function returns
        main() exits (!)

    What I want is for main() to somehow delay until that object is finished with what it's doing (aka it will delete itself) before it quits. Any thoughts?
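    To make the flow above concrete, here is a minimal sketch of the situation as described; the class and method names are hypothetical, not taken from the original code:

        // Hypothetical illustration of the problem flow described above.
        #include <iostream>

        class SocketServer {
        public:
            explicit SocketServer(int port) {
                // socket(), bind(), listen(), then block in accept() ...
                // the constructor returns as soon as the first client connects
                std::cout << "connection received on port " << port << ", constructor returns\n";
            }
            // the mirroring logic lives in member functions, so nothing keeps running
        };

        int main() {
            SocketServer server(8080);  // returns after the first connection
            // nothing left for main() to do, so the program exits here
            return 0;
        }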

  • Language Translation API

    - by kandarp
    How can I translate text from one language to another in Java? Is there any API which can translate any language to any other language? I am using the Google Translate API, but it is giving me the exception below.

        java.lang.Exception: [google-api-translate-java] Error retrieving translation.
            at com.google.api.GoogleAPI.retrieveJSON(GoogleAPI.java:123)
            at com.google.api.translate.Translate.execute(Translate.java:69)
            at com.nextenders.client.beans.ruleengine.RuleEngineTest.main(RuleEngineTest.java:27)
        Caused by: java.net.ConnectException: Connection timed out: connect
            at java.net.PlainSocketImpl.socketConnect(Native Method)
            at java.net.PlainSocketImpl.doConnect(Unknown Source)
            at java.net.PlainSocketImpl.connectToAddress(Unknown Source)
            at java.net.PlainSocketImpl.connect(Unknown Source)
            at java.net.SocksSocketImpl.connect(Unknown Source)
            at java.net.Socket.connect(Unknown Source)
            at java.net.Socket.connect(Unknown Source)
            at sun.net.NetworkClient.doConnect(Unknown Source)
            at sun.net.www.http.HttpClient.openServer(Unknown Source)
            at sun.net.www.http.HttpClient.openServer(Unknown Source)
            at sun.net.www.http.HttpClient.(Unknown Source)
            at sun.net.www.http.HttpClient.New(Unknown Source)
            at sun.net.www.http.HttpClient.New(Unknown Source)
            at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(Unknown Source)
            at sun.net.www.protocol.http.HttpURLConnection.plainConnect(Unknown Source)
            at sun.net.www.protocol.http.HttpURLConnection.connect(Unknown Source)
            at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(Unknown Source)
            at com.google.api.GoogleAPI.retrieveJSON(GoogleAPI.java:107)
            ... 2 more

    If anybody knows any API for translation, please tell me.

  • In .NET, how do I prevent, or handle, tampering with form data of disabled fields before submission?

    - by David
    Hi, if a disabled drop-down list is dynamically rendered to the page, it is still possible to use Firebug, or another tool, to tamper with the submitted value and to remove the "disabled" HTML attribute. This code:

        protected override void OnLoad(EventArgs e)
        {
            var ddlTest = new DropDownList() { ID = "ddlTest", Enabled = false };
            ddlTest.Items.AddRange(new [] {
                new ListItem("Please select", ""),
                new ListItem("test 1", "1"),
                new ListItem("test 2", "2")
            });
            Controls.Add(ddlTest);
        }

    results in this HTML being rendered:

        <select disabled="disabled" id="Properties_ddlTest" name="Properties$ddlTest">
            <option value="" selected="selected">Please select</option>
            <option value="1">test 1</option>
            <option value="2">test 2</option>
        </select>

    The problem occurs when I use Firebug to remove the "disabled" attribute and to change the selected option. On submission of the form, and re-creation of the field, the newly generated control has the correct value by the end of OnLoad, but by OnPreRender it has assumed the identity of the submitted control and has been given the submitted form value. .NET seems to have no way of detecting the fact that the field was originally created in a disabled state and that the submitted value was faked. This is understandable, as there could be legitimate client-side functionality that would allow the disabled attribute to be removed.

    Is there some way, other than a brute-force approach, of detecting that this field's value should not have been changed? I see the brute-force approach as being something crap, like saving the correct value somewhere while still in OnLoad, and restoring the value in OnPreRender. As some fields have dependencies on others, that would be unacceptable to me.

  • Problem with jquery #find on partial postback

    - by anonymous
    I have a third-party component, a calendar control. I have a client-side event on it which fires JavaScript to show a popup menu. I do everything client side so I can use MVC.

        function MouseDown(oDayView, oEvent, element) {
            try {
                e = oEvent.event;
                var rightClick = (e.button == 2);
                if (rightClick) {
                    var menu = $find("2_menuSharedCalPopUp");
                    menu.showAt(200, 200, e);
                }
            }
            catch (err) {
                alert("MouseDown() err: " + err.description);
            }
        }

    The JavaScript fires perfectly with the $find initially. I have another client-side method which updates the calendar via a partial postback. Once I have done this, all subsequent MouseDowns (right clicks) which use the $find statement error with 'null'. All the similar problems people describe out there seem to be around calling JavaScript after a postback, with solutions being re-registering an event using PageRequestManager or registering a client-side function on the server, et cetera. However, the event is firing, and the JavaScript is working; it's the reference in the DOM that seems to be the issue. Any ideas?

  • Error creating Google Calendar

    - by Don
    Hi, I'm trying to use the Google Calendar Java API to create a secondary calendar for a Google Apps user. The relevant code is:

        // Initialise the API client
        CalendarService googleCalendar = new GCalendarService("canimo.ca");
        googleCalendar.setUserCredentials("[email protected]", "secret");

        // Create the calendar
        CalendarEntry cal = new CalendarEntry();
        cal.title = new PlainTextConstruct(user.email);
        cal.summary = new PlainTextConstruct("Collection calendar");
        cal.timeZone = new TimeZoneProperty("America/Montreal");
        cal.hidden = HiddenProperty.FALSE;
        googleCalendar.insert(CALENDAR_URL, calendar);

    The call to insert() above results in:

        com.google.gdata.util.ServiceException: Internal Server Error

    and no calendar is created. If I try to perform the same operation through the Google Calendar website, it also fails with the error message: "We could not save changes. Please try again in a few minutes."

    An obvious conclusion is that there's some problem on Google's side, but this has been going on for several days now. Strangely, if I create a new user, everything works fine for a while. I can create calendars via the website, and download them via the API. But as soon as I try to create a new calendar using the API, I get the exception above, and thereafter can't create new calendars using either the website or the API. Thanks, Don

  • extension methods with generics - when does caller need to include type parameters?

    - by Greg
    Hi, is there a rule for knowing when one has to pass the generic type parameters in the client code when calling an extension method? For example, in the Program class below, why can I (a) not pass type parameters for top.AddNode(node), whereas later for (b) the top.AddRelationship line I have to pass them?

        class Program
        {
            static void Main(string[] args)
            {
                // Create Graph
                var top = new TopologyImp<string>();

                // Add Node
                var node = new StringNode();
                node.Name = "asdf";
                var node2 = new StringNode();
                node2.Name = "test child";
                top.AddNode(node);
                top.AddNode(node2);

                top.AddRelationship<string, RelationshipsImp>(node, node2);   // *** HERE ***
            }
        }

        public static class TopologyExtns
        {
            public static void AddNode<T>(this ITopology<T> topIf, INode<T> node)
            {
                topIf.Nodes.Add(node.Key, node);
            }

            public static INode<T> FindNode<T>(this ITopology<T> topIf, T searchKey)
            {
                return topIf.Nodes[searchKey];
            }

            public static void AddRelationship<T, R>(this ITopology<T> topIf, INode<T> parentNode, INode<T> childNode)
                where R : IRelationship<T>, new()
            {
                var rel = new R();
                rel.Child = childNode;
                rel.Parent = parentNode;
            }
        }

        public class TopologyImp<T> : ITopology<T>
        {
            public Dictionary<T, INode<T>> Nodes { get; set; }

            public TopologyImp()
            {
                Nodes = new Dictionary<T, INode<T>>();
            }
        }

  • How to use data receive event in Socket class?

    - by affan
    I have written a simple client that uses TcpClient in .NET to communicate. In order to wait for data messages from the server I use a Read() thread that makes a blocking Read() call on the socket. When I receive something I have to generate various events. These events occur in the worker thread, and thus you cannot update a UI from them directly. Invoke() can be used, but for the end developer it's difficult, as my SDK would be used by people who may not use a UI at all or may use Presentation Framework, which has a different way of handling this. Invoke() on our test app as a MicroStation add-in takes a lot of time at the moment. MicroStation is a single-threaded application and calling Invoke() on its thread is not good, as it is always busy doing drawing and other stuff, so the message takes too long to process.

    I want my events to be generated in the same thread as the UI, so users do not have to go through the Dispatcher or Invoke(). Now I want to know: how can I be notified by the socket when data arrives? Is there a built-in callback for that? I would like a WinSock-style receive event without the use of a separate read thread. I also do not want to use a Windows timer to poll for data. I found the IOControlCode.AsyncIO flag in the IOControl() function, whose help says "Enable notification for when data is waiting to be received. This value is equal to the Winsock 2 FIOASYNC constant." I could not find any example of how to use it to get the notification. If I am right, in MFC/WinSock we had to create a window of size (0,0) which was just used for listening for the data-receive event and other socket events. But I don't know how to do that in a .NET application.

  • Remotely connecting two non-local computers with sockets

    - by Velizar Hristov
    This question seems like something very obvious to ask, and yet I spent more than an hour trying to find an answer. First I host and wait for someone to connect. Then, from another instance of the application, I try to connect with a socket; for the constructor, I use an InetAddress and a port. The port is always right, and everything works if I use "localhost" for the address. However, if I type my IP, I get an IOException. I even sent the application to someone else, gave him my IP, and it didn't work. The aim of the application is to connect two computers. It's in Java. Here is the relevant code.

    Server:

        ServerSocket serverSocket = new ServerSocket(port);
        Socket clientSocket = serverSocket.accept();

    Client:

        InetAddress a = InetAddress.getByName(ip);
        Socket s = new Socket(a, port);

    I don't get past that. Obviously, the values of int port and String ip are taken from text fields.

    Edit: the purpose of my application is to connect two non-local computers.

  • jquery change event not working with IE6

    - by manivineet
    It is indeed quite unfortunate that my client still uses IE6. I am using jQuery 1.4.2. The problem is that I open a window using a click event and do some edit operation in the new window. I have a 'change' event attached to the rows of a table which has input fields. Now when the window loads for the first time and I make a change in the input for the FIRST time, the change event does not fire. However, on a second try it starts working. I have noticed that if I run a dummy page, i.e. create a new page (I work with Visual Studio) and run that page individually, the 'change' event works just fine. What is going on? And what can I do, besides going back to 1.3.2 (which, by the way, doesn't work either, but I haven't fully tested it yet)?

        <!-- HTML -->
        <table id="tbReadData">
            <tr class="nenDataRow" id="nenDr2">
                <td>
                    <input type="text" class="nenMeterRegister" value="1234" />
                </td>
            </tr>
        </table>

        <script type="text/javascript">
            $(document).ready(function() {
                $('#tbReadData').find('tr').change(function() {
                    alert('this works');
                });
            });
        </script>

  • How to deserialize implementation classes in OSGi

    - by Daniel Schneller
    In an eRCP OSGi based application the user can push a button and go to a lock screen similar to that of Windows or Mac OS X. When this happens, the current state of the application is serialized to a file and control is handed over to the lock screen. In this mobile application memory is very tight, so we need to get rid of the original view/controller when the lock screen comes up. This works fine and we end up with a binary serialized file. Once the user logs back in, the file is read in again and the original state of the application restored. This works fine as well, except when the controller that was serialized contained a reference to an object which comes from a different bundle. In my concrete case the original controller (from bundle A) can call a web service and gets a result back. Nothing fancy, just some Strings and Numbers in a simple value holder class. However the controller only sees this as a Result interface; the actual runtime object (ResultImpl) is defined and created in a different bundle (bundle B, the webservice client implementation) and returned via a service call. When the deserialization now tries to thaw the controller from the file, it throws a ClassNotFound exception, complaining about not being able to deserialize the result object, because deserialization is called from bundle A, which cannot see the ResultImpl class from bundle B. Any ideas on how to work around that? The only thing I could come up with is to clone all the individual values into another object, defined in the controller's bundle, but this seems like quite a hassle.

  • Multiple websites, Single sign-on design

    - by Yannis
    Hi all, I have a question. A client I have been doing some work for recently has a range of websites with different login mechanisms. He is looking to slowly migrate to a single sign-on mechanism for his websites (all written in ASP.NET MVC). I am looking at my options here, so here is a list of requirements:

        1) It has to be secure (duh)
        2) It needs to support extra user properties over and above the usual name, address stuff (such as money or credits for a user)
        3) It has to provide a centralized user management web console for his convenience (I understand that this will be a small project on top of whatever design solution I choose to go for)
        4) It has to integrate with the existing websites without re-engineering the whole product (I understand that this depends on the current product implementation)
        5) It has to deal with emailing the user when he registers (in order for him to activate his account)
        6) It has to deal with activating the user when he clicks the "activate me" link in the email (I understand that 5 and 6 require some form of email templating system to support different emails per application)

    I was thinking of creating a library working together with forms authentication that exposes whatever methods are required (e.g. login, logout, activate, etc.) and a small RESTful service to implement activation from email, registration processing, etc. Taking into account that loads of things have been left out to make this question brief and to the point, does this sound like a good design? But this looks like a very common problem, so aren't there any existing projects that I could use? Thanks for reading.

  • ~1 second TcpListener Pending()/AcceptTcpClient() lag

    - by cpf
    It's probably easiest to just watch this video: http://screencast.com/t/OWE1OWVkO

    As you can see, there is a delay between a connection being initiated (via telnet or Firefox) and my program first getting word of it. Here's the code that waits for the connection:

        public IDLServer(System.Net.IPAddress addr, int port)
        {
            Listener = new TcpListener(addr, port);
            Listener.Server.NoDelay = true; // I added this just for testing, it has no impact
            Listener.Start();
            ConnectionThread = new Thread(ConnectionListener);
            ConnectionThread.Start();
        }

        private void ConnectionListener()
        {
            while (Running)
            {
                while (Listener.Pending() == false)
                {
                    System.Threading.Thread.Sleep(1);
                } // this is the part with the lag

                Console.WriteLine("Client available"); // from this point on everything runs perfectly fast
                TcpClient cl = Listener.AcceptTcpClient();
                Thread proct = new Thread(new ParameterizedThreadStart(InstanceHandler));
                proct.Start(cl);
            }
        }

    (I was having some trouble getting the code into a code block.) I've tried a couple of different things; could it be that I'm using TcpClient/Listener instead of a raw Socket object? It's not mandatory TCP overhead, I know, and I've tried running everything in the same thread, etc.

  • how to implement enhanced session handling in PHP

    - by praksant
    Hi, I'm working with sessions in PHP, and I have different applications on a single domain. The problem is that cookies are domain-specific, so session IDs are sent to every page on that domain (I don't know if there is a way to make cookies work differently), and session variables end up visible to every page on the domain. I'm trying to implement a custom session manager to overcome this behavior, but I'm not sure if I'm thinking about it right. I want to completely avoid the PHP session system and make a global object which would store session data and, at the end of the script, save it to the database:

        1. On first access I would generate a unique session_id and create a cookie.
        2. At the end of the script, save the session data with the session_id, timestamps for the start of the session and the last access, and data from $_SERVER such as REMOTE_ADDR, REMOTE_PORT and HTTP_USER_AGENT.
        3. On every access, check the database for the session_id sent in the cookie from the client, check the IP, port and user agent (for security) and read the data into the session variable (if not expired).
        4. If the session_id has expired, delete it from the database.

    The session variable would be implemented as a singleton (I know I would get tight coupling with this class, but I don't know of a better solution). I'm trying to get the following benefits: session variables invisible to other scripts on the same server and same domain, custom management of session expiration, and a way to see open sessions (something like a list of online users). I'm not sure if I'm overlooking any disadvantages of this solution. Is there any better way? Thank you!!
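    Purely to illustrate the design described above, a minimal sketch of such a manager; the table name, columns, timeout and PDO connection are assumptions, not part of the original plan:

        <?php
        // Hypothetical DB-backed session manager; schema and connection details are made up.
        class AppSession
        {
            private static $instance;
            private $pdo;
            private $id;
            public $data = array();

            public static function get(PDO $pdo)   // singleton access, as described
            {
                if (self::$instance === null) {
                    self::$instance = new self($pdo);
                }
                return self::$instance;
            }

            private function __construct(PDO $pdo)
            {
                $this->pdo = $pdo;
                if (isset($_COOKIE['app_sid'])) {
                    $this->id = $_COOKIE['app_sid'];
                    $stmt = $pdo->prepare(
                        'SELECT data FROM app_sessions
                         WHERE id = ? AND ip = ? AND user_agent = ? AND last_access > ?');
                    $stmt->execute(array($this->id, $_SERVER['REMOTE_ADDR'],
                                         $_SERVER['HTTP_USER_AGENT'], time() - 1800));
                    $row = $stmt->fetch();
                    $this->data = $row ? unserialize($row['data']) : array();
                } else {
                    $this->id = bin2hex(openssl_random_pseudo_bytes(16));
                    setcookie('app_sid', $this->id, 0, '/myapp/');  // cookie path limits it to one app
                }
            }

            public function save()   // call this at the end of the script
            {
                // MySQL-style REPLACE used for brevity
                $stmt = $this->pdo->prepare(
                    'REPLACE INTO app_sessions (id, ip, user_agent, last_access, data)
                     VALUES (?, ?, ?, ?, ?)');
                $stmt->execute(array($this->id, $_SERVER['REMOTE_ADDR'],
                                     $_SERVER['HTTP_USER_AGENT'], time(),
                                     serialize($this->data)));
            }
        }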

  • Any danger in calling flash messages html_safe?

    - by PreciousBodilyFluids
    I want a flash message that looks something like: "That confirmation link is invalid or expired. Click here to have a new one generated." where "click here" is of course a link to another action in the app where a new confirmation link can be generated.

    Two drawbacks. One: since link_to isn't defined in the controller where the flash message is being set, I have to put the link HTML in myself. No big deal, but kind of messy. Two: in order for the link to actually display properly on the page I have to html_safe the flash display function in the view, so now it looks like (using Haml):

        - flash.each do |name, message|
          = content_tag :div, message.html_safe

    This gives me pause. Everything else I html_safe has been HTML I've written myself in helpers and whatnot, but the contents of the flash hash are stored in a cookie client-side and could conceivably be changed. I've thought it through, and I don't see how this could result in an XSS attack, but XSS isn't something I have a great understanding of anyway.

    So, two questions: 1. Is there any danger in always html_safe-ing all flash contents like this? 2. The fact that this solution is so messy (breaking MVC by using HTML in the controller, always html_safe-ing all flash contents) makes me think I'm going about this wrong. Is there a more elegant, Rails-ish way to do this? I'm using Rails 3.0.0.beta3.
