Search Results

Search found 10695 results on 428 pages for 'none none'.


  • REST API error return good practices

    - by Remus Rusanu
    I'm looking for guidance on good practices when it comes to returning errors from a REST API. I'm working on a new API so I can take it in any direction right now. My content type is XML at the moment, but I plan to support JSON in the future. I am now adding some error cases, for instance a client attempting to add a new resource after having exceeded his storage quota. I am already handling certain error cases with HTTP status codes (401 for authentication, 403 for authorization and 404 for plain bad request URIs). I looked over the blessed HTTP error codes, but none of the 400-417 range seems right for reporting application-specific errors. So at first I was tempted to return my application error with 200 OK and a specific XML payload (i.e. "Pay us more and you'll get the storage you need!"), but I stopped to think about it and it seems too SOAP-y (/shrug in horror). Besides, it feels like I'm splitting the error responses into distinct cases, as some are HTTP status code driven and others are content driven. So what is the SO crowd's recommendation? Good practices (please explain why!) and also, from a client's point of view, what kind of error handling in the REST API makes life easier for the client code?
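    For illustration only, a minimal sketch (framework-agnostic Python; the helper name, status choice and XML layout are assumptions, not a prescription) of the usual compromise: keep a meaningful HTTP status code and put the application-specific detail in the response body rather than hiding it behind 200 OK:

        def quota_exceeded_response():
            # hypothetical helper: pair a standard HTTP status with an
            # application-specific error payload in the negotiated content type
            body = (
                '<error>'
                '<code>STORAGE_QUOTA_EXCEEDED</code>'
                '<message>Pay us more and you will get the storage you need.</message>'
                '</error>'
            )
            status = 403  # or 507 Insufficient Storage, if its semantics fit
            headers = {'Content-Type': 'application/xml'}
            return status, headers, body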

    Read the article

  • SQLite/Fluent NHibernate integration test harness initialization not repeatable after large data sets

    - by Mark Rogers
    In one of my main data integration test harnesses I create and use Fluent NHibernate's SingleConnectionSessionSourceForSQLiteInMemoryTesting to get a fresh session for each test. After each test, I close the connection, session, and session factory, and throw out the nested StructureMap container they came from. This works for almost any simple data integration test I can think of, including ones that utilize Fluent NHib's PersistenceSpecification object. When I test the application's lengthy database bootstrapping process, which creates and saves thousands of domain objects, I start seeing issues. It's not that the setup and teardown fail; in fact, the test successfully bootstraps the in-memory database just as the application would bootstrap the real database in the production environment. The problem occurs when the database bootstrapping runs a second time on a new in-memory database, with a new session and session factory. The error is: NHibernate.StaleStateException : Unexpected row count: 0; expected: 1. The row count is indeed unexpected; the row that the application under test is looking for should be in the session. You see, it's not that any data from the last integration test is sticking around, it's that for some reason the session just stops working mid-database-bootstrap. And I've looked everywhere for a place I might be holding on to an old session and I can't find one. I've searched through the code for static singleton objects, but there are none anywhere near the code in question. I have a couple of StructureMap InstanceScope singletons, but they are getting thrown out with each nested container that is lost after every test teardown. I've tried every possible variation on disposing and closing every object involved with each test teardown, and it still fails on this lengthy database bootstrap. But non-bootstrap-related database tests appear to work fine. I'm starting to run out of options and may have to surrender lengthy database integration tests in favor of WatiN-based acceptance tests. Can anyone give me any clue about how I can figure out why my SingleConnectionSessionSourceForSQLiteInMemoryTesting tests aren't repeatable? Any advice at all about how to make an NHibernate SQLite database integration test harness repeatable?

    Read the article

  • Changing CSS height property removes anchors?

    - by JMan
    I am working on a standard header/navigation for my website. I started with a CSS "height" value of 100% for the html and body elements, but this resulted in the wrong layout. However, when I change the CSS height property from "100%" to "auto", the layout is correct, but I lose the anchors in the navigation. Why is this? Here's my CSS:

        html, body {
            height: auto; /* This is the only property that I'm toggling */
            margin: 0;
        }
        body {
            margin: 0;
            min-width: 978px;
            font: 12px/16px Arial, Helvetica, sans-serif;
            color: #000;
            background: #000001 url('../images/bg-body.jpg') no-repeat 50% 0;
        }
        #nav {
            position: relative;
            float: left;
            margin: 0;
            padding: 0 2px 0 271px;
            list-style: none;
            background: url('../images/sep-nav.gif') no-repeat 100% 0;
        }
        #nav li {
            float: left;
            padding: 11px 0 0 2px;
            height: 35px;
            width: 128px;
            line-height: 22px;
            font-size: 18px;
            text-align: center;
            background: url('../images/sep-nav.gif') no-repeat 0 -1px;
            display: inline;
        }
        #nav li a { color: #FFFEFE; }
        .....

    I compared the computed CSS in Firebug when the html, body height property was set to "auto" vs. "100%" and they were the same. Can somebody please shed some light on how to retain the anchors in the navigation while setting height to "auto"? Thanks for your help.

    Read the article

  • How to run R through PHP with exec?

    - by dkar
    I am going to ask something that I know has already been asked a few times. But since all of the past posts are quite old and none of them answers my problem, I'll try again. I am completely new to the R language and relatively new to PHP. What I want to do is to use the exec() function from PHP in order to execute an R script. Most of the people here will start talking about rapache, Rserve and I don't know what else, but since I am not familiar with all these technologies, I prefer just using exec. The code shown here works just fine when I run it with Rscript from the terminal:

        # R script
        png("temp.png")
        plot(5,5)
        dev.off()

    But when I try to run it either with Rscript or with R CMD BATCH from PHP, like this:

        echo exec("Rscript my_rscript.R");
        // OR
        // echo exec("R CMD BATCH my_rscript.R");

    I get nothing back. I have checked whether the exec() function is available and whether it works; everything is OK with this. I also read that I might have to change the permissions of the web server, but I don't know how to do this in MAMP. I hope I am clear with my problem and someone can help. Thanks, Dimitris

    Read the article

  • How do I read binary C++ protobuf data using Python protobuf?

    - by nbolton
    The Python version of Google protobuf gives us only:

        SerializeAsString()

    whereas the C++ version gives us both:

        SerializeToArray(...)
        SerializeAsString()

    We're writing to our C++ file in binary format, and we'd like to keep it this way. That said, is there a way of reading the binary data into Python and parsing it as if it were a string? Is this the correct way of doing it?

        binary = get_binary_data()
        binary_size = get_binary_size()
        string = None
        for i in range(len(binary_size)):
            string += i
        message = new MyMessage()
        message.ParseFromString(string)

    Update: Here's a new example, and a problem:

        message_length = 512
        file = open('foobars.bin', 'rb')
        eof = False
        while not eof:
            data = file.read(message_length)
            eof = not data
            if not eof:
                foo_bar = FooBar()
                foo_bar.ParseFromString(data)

    When we get to the foo_bar.ParseFromString(data) line, I get this error: Exception Type: DecodeError, Exception Value: Too many bytes when decoding varint.

    Update 2: It turns out that the padding on the binary data was throwing protobuf off; too many bytes were being sent in, as the message suggests (in this case it was referring to the padding). This padding comes from using the C++ protobuf function SerializeToArray on a fixed-length buffer. To eliminate this, I have used this temporary code:

        message_length = 512
        file = open('foobars.bin', 'rb')
        eof = False
        while not eof:
            data = file.read(message_length)
            eof = not data
            string = ''
            for i in range(0, len(data)):
                byte = data[i]
                if byte != '\xcc':  # yuck!
                    string += data[i]
            if not eof:
                foo_bar = FooBar()
                foo_bar.ParseFromString(string)

    There is a design flaw here, I think. I will re-implement my C++ code so that it writes variable-length arrays to the binary file. As advised by the protobuf documentation, I will prefix each message with its binary size so that I know how much to read when I'm opening the file with Python.
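    A minimal sketch of that length-prefixed scheme on the Python side (the 4-byte little-endian size prefix is an assumption, the C++ writer would have to use the same convention; FooBar stands for the generated message class from the question):

        import struct

        def read_messages(path, message_factory):
            # read <4-byte length><serialized message> records until EOF
            with open(path, 'rb') as f:
                while True:
                    header = f.read(4)
                    if len(header) < 4:
                        break
                    (size,) = struct.unpack('<I', header)
                    payload = f.read(size)
                    message = message_factory()   # e.g. FooBar
                    message.ParseFromString(payload)
                    yield message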

    Read the article

  • jQuery show based on checkbox value at page load.

    - by Jacob Huggart
    I have an ASP MVC web app and on one of the pages there is a set of main checkboxes with sub-checkboxes underneath them. The sub-checkboxes should only show up when the corresponding main checkbox is checked. I have the following code, which works just fine as long as none of the checkboxes are checked when the page loads:

        $("input[id$=Suffix]").change(function() {
            prefix = this.id;
            if (!$(this).hasClass("checked")) {
                $("tr[id^=" + prefix + "]").show();
                $(this).addClass("checked");
            } else {
                $("tr[id^=" + prefix + "]").hide();
                $(this).removeClass("checked");
            }
        });

    Now I need to check a database for the values of the main checkboxes. I get the values, and can check the boxes on page load. But when the page comes up, the sub-checkboxes are not displayed when the main checkbox is checked. Also, if the main checkbox is checked when the page loads, the sub-checkboxes are only displayed when the main checkbox is unchecked (obviously because the above function only acts on .change()). What do you all suggest I try? If you need further explanation feel free to ask. Edit: by the way, all of this is in $(document).ready().

    Read the article

  • Python: OSX Library for fast full screen jpg/png display

    - by Parand
    Frustrated by the lack of a simple ACDSee equivalent for OS X, I'm looking to hack one up for myself. I'm looking for a GUI library that accommodates:

    - Full screen image display
    - High quality image fit-to-screen (for display)
    - Low memory usage
    - Fast display
    - Reasonable learning curve (the simpler the better)

    Looks like there are several choices, so which is the best? Here are some I've run across:

    - PyOpenGL
    - PyGame
    - PyQT
    - wxpython

    I don't have any particular experience with any of these, nor any strong desire to become an expert - I'm looking for the simplest solution. What do you recommend?

    [Update] For those not familiar with ACDSee, here's what it does that I care about:

    - Simple list/thumbnail display of images in a directory
    - Sort by name/size/type
    - Ability to view images full screen
    - Single-key delete while viewing full screen
    - Move to next/previous image while viewing full screen
    - Ability to select a group of images for: move to / copy to directory, delete, resize

    ACDSee has a bunch of niceties as well, such as remembering directories you've moved images to in the past, remembering your resize settings, displaying the total size of the images you've selected, etc. I've tried most of the options I could find (including Xee) and none of them quite get there. Please keep in mind that this is a programming/library question, not a criticism of any of the existing tools.
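    As a point of reference only, a rough sketch of full-screen, fit-to-screen display with one of the listed options (Pygame here; the path argument and the single-keypress exit are placeholders, and this is not a recommendation of Pygame over the others):

        import pygame

        def show_fullscreen(path):
            pygame.init()
            info = pygame.display.Info()
            screen = pygame.display.set_mode((info.current_w, info.current_h), pygame.FULLSCREEN)
            sw, sh = screen.get_size()
            img = pygame.image.load(path).convert()
            iw, ih = img.get_size()
            scale = min(sw / float(iw), sh / float(ih))       # fit to screen, keep aspect ratio
            img = pygame.transform.smoothscale(img, (int(iw * scale), int(ih * scale)))
            screen.fill((0, 0, 0))
            screen.blit(img, img.get_rect(center=(sw // 2, sh // 2)))
            pygame.display.flip()
            while True:                                       # wait for any key or quit
                event = pygame.event.wait()
                if event.type in (pygame.QUIT, pygame.KEYDOWN):
                    break
            pygame.quit()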

    Read the article

  • validation control unable to find its control to validate

    - by nat
    I have a repeater that is bound to a number of custom data items/types. On the ItemDataBound event for the repeater, the code calls a RenderEdit function that, depending on the custom data type, will render a custom control. It will also (if the validation flag is set) render a validation control for the appropriate rendered edit control. The edit control overrides the CreateChildControls() method for the custom control, adding a number of LiteralControls, thus:

        protected override void CreateChildControls()
        {
            // other bits removed - but it is this 'hidden' control I am trying to validate
            this.Controls.Add(new LiteralControl(string.Format(
                "<input type=\"text\" name=\"{0}\" id=\"{0}\" value=\"{1}\" style=\"display:none;\" \">",
                this.UniqueID,
                this.MediaId.ToString())
            ));
            // some other bits removed
        }

    The validation control is rendered like this, where the passed-in editControl is the control instance of which the above CreateChildControls is a method:

        public override Control RenderValidationControl(Control editControl)
        {
            Control ctrl = new PlaceHolder();
            RequiredFieldValidator req = new RequiredFieldValidator();
            req.ID = editControl.ClientID + "_validator";
            req.ControlToValidate = editControl.UniqueID;
            req.Display = ValidatorDisplay.Dynamic;
            req.InitialValue = "0";
            req.ErrorMessage = this.Caption + " cannot be blank";
            ctrl.Controls.Add(req);
            return ctrl;
        }

    The problem is, although the validation control's .ControlToValidate property is set to the UniqueID of the edit control, when I hit the page I get the following error: Unable to find control id 'FieldRepeater$ctl01$ctl00' referenced by the 'ControlToValidate' property of 'FieldRepeater_ctl01_ctl00_validator'. I have tried changing the literal in CreateChildControls to a new TextBox() and then setting the id, but I get a similar problem. Can anyone enlighten me? Is this because of the order the controls are rendered in, i.e. the validation control is written before the edit control? Or something else? Anyhow, any help much appreciated. Thanks, nat

    Read the article

  • Dojox Datagrid contains data, but shows up as empty

    - by Vivek
    I'd really appreciate any help on this. There is a Dojox DataGrid that I'm creating programmatically and supplying JSON data. As of now, I'm creating this data within JavaScript itself. Please refer to the below code sample.

        var upgradeStageStructure = [{
            cells: [
                { field: "stage", name: "Stage", width: "50%", styles: 'text-align: left;' },
                { field: "status", name: "Status", width: "50%", styles: 'text-align: left;' }
            ]
        }];

        var upgradeStageData = [
            { id: 1, stage: "Preparation", status: "Complete" },
            { id: 2, stage: "Formatting", status: "Complete" },
            { id: 3, stage: "OS Installation", status: "Complete" },
            { id: 4, stage: "OS Post-Installation", status: "In Progress" },
            { id: 5, stage: "Application Installation", status: "Not Started" },
            { id: 6, stage: "Application Post-Installation", status: "Not Started" }
        ];

        var stagestore = new dojo.data.ItemFileReadStore({ data: { identifier: "id", items: upgradeStageData } });

        var upgradeStatusGrid = new dojox.grid.DataGrid({
            autoHeight: true,
            style: "width:400px;padding:0em;margin:0em;",
            store: stagestore,
            clientSort: false,
            rowSelector: '20px',
            structure: upgradeStageStructure,
            columnReordering: false,
            selectable: false,
            singleClickEdit: false,
            selectionMode: 'none',
            loadingMessage: 'Loading Upgrade Stages',
            noDataMessage: 'There is no data',
            errorMessage: 'Failed to load Upgrade Status'
        });

        dojo.byId('progressIndicator').innerHTML = '';
        dojo.byId('progressIndicator').appendChild(upgradeStatusGrid.domNode);
        upgradeStatusGrid.startup();

    The problem is that I am not seeing anything within the grid upon display (no headers, no data). But I know for sure that the data in the grid does exist and the grid is properly initialized, because I called alert(grid.domNode.innerHTML); and the resulting HTML does show a table containing header rows and the above data. This link contains an image which illustrates what I'm seeing when I display the page. (Can't post images since my account is new here.) As you may notice, there are 6 rows for the 6 pieces of data I have created, but the grid is a mess. Please help out if you think you know what could be going wrong. Thanks in advance, Viv

    Read the article

  • Spring 2.5 managed servlets: howto?

    - by EugeneP
    Correct me if anything is wrong. As I understand it, all Spring functionality, namely DI, works when beans are obtained through the Spring context, i.e. the getBean() method. Otherwise none of it can work: even if my method is marked @Transactional, if I create the owning class with the new operator, no transaction management will be provided. I use Tomcat 6 as a servlet container. So, my question is: how do I make servlet methods managed by the Spring framework? The issue here is that I use a framework whose servlets extend the functionality of basic Java servlets, so they have more methods. Still, web.xml is present in the app as usual. The thing is that I do not control the servlet creation flow; I can only override a few methods of each servlet. The flow is basically written down in some XML file, and I control this process using a graphical GUI. So, basically, I only add some code to a few methods of each servlet. How can I make those methods managed by the Spring framework? The basic thing I need is to make these methods transactional (@Transactional).

    Read the article

  • IE6 PNG-transparency CSS hack not working

    - by John
    I looked around and decided to use a CSS approach rather than rely on JS... I figure the kind of corporate users stuck with IE6 might also have JS disabled by IT departments. So in my HTML I have:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
        <head>
            <meta http-equiv="Content-Type" content="text/html;charset=utf-8" />
            <title>My Page</title>
            <link rel="stylesheet" type="text/css" href="default.css" />
            <!--[if IE 6]><link rel="stylesheet" type="text/css" href="ie6.css"><![endif]-->
        </head>
        <body>
            <img src="media/logo.png"/>
        </body>

    Then my ie6.css consists simply of:

        img {
            filter: progid:DXImageTransform.Microsoft.AlphaImageLoader(...);
        }

    However, none of this makes the slightest difference: no transparency. I commented out all the rest of the page so it is literally that one image, and still no luck. I removed the default.css stylesheet and still no difference.

    Read the article

  • NTLM Authentication fails when behind Proxy server

    - by Jan Petersen
    Hi all, I've seen a number of posts about consuming web services from behind a proxy server, but none that seems to address this problem. I'm building a desktop application, using Java and JAX-WS in NetBeans. I have a working prototype that can query the server for its authentication mode, successfully authenticate and retrieve a list of web sites. However, if I run the same app from a network that is behind a proxy server (the proxy does not require authentication), then I'm running into trouble. I have sniffed the traffic and noticed the following.

    Behind proxy:

        #  Result  Protocol  Host             URL
        1  200     HTTP      host.domain.com  /_vti_bin/Authentication.asmx
        2  401     HTTP      host.domain.com  /_vti_bin/Webs.asmx
        3  401     HTTP      host.domain.com  /_vti_bin/Webs.asmx
        4  401     HTTP      host.domain.com  /_vti_bin/Webs.asmx
        5  401     HTTP      host.domain.com  /_vti_bin/Webs.asmx

    Without proxy:

        #  Result  Protocol  Host             URL
        1  200     HTTP      host.domain.com  /_vti_bin/Authentication.asmx
        2  401     HTTP      host.domain.com  /_vti_bin/Webs.asmx
        3  401     HTTP      host.domain.com  /_vti_bin/Webs.asmx
        4  401     HTTP      host.domain.com  /_vti_bin/Webs.asmx
        5  401     HTTP      host.domain.com  /_vti_bin/Webs.asmx
        6  200     HTTP      host.domain.com  /_vti_bin/Webs.asmx

    When running the code from a network without a proxy server, I successfully authenticate with the server, but when I'm behind the proxy server, the traffic is cut off at the 5th message and thus authentication doesn't succeed. I know from the Java docs that on Microsoft Windows platforms, NTLM authentication attempts to acquire the user credentials from the system without prompting the user's authenticator object. If these credentials are not accepted by the server, then the user's authenticator will be called. Given that my authentication code is called only once, and only as the 5th attempt, it appears as if the connection is dropped when behind the proxy server before my Authentication object is used. Is there any way I can control the behavior of the authentication module so that it does not use the system credentials? I have put the source text Java class files of a demo app up, showing the issue, at the following URLs (it's a bit too long even in the short demo form to post here): link text. Br, Jan

    Read the article

  • Measuring execution time of selected loops

    - by user95281
    I want to measure the running times of selected loops in a C program so as to see what percentage of the total time for executing the program (on Linux) is spent in these loops. I should be able to specify the loops for which the performance should be measured. I have tried out several tools (VTune, HPCToolkit, OProfile) in the last few days and none of them seems to do this. They all find the performance bottlenecks and just show the time for those. That's because these tools only store the time taken that is above a threshold (~1 ms), so if one loop takes less time than that, its execution time won't be reported. The basic block counting feature of gprof depends on a feature in older compilers that is not supported now. I could manually write a simple timer using gettimeofday or something like that, but in some cases it won't give accurate results. For example:

        for (i = 0; i < 1000; ++i) {
            for (j = 0; j < N; ++j) {
                // do some work here
            }
        }

    Now here I want to measure the total time spent in the inner loop, and I would have to put a call to gettimeofday inside the first loop. So gettimeofday itself will get called 1000 times, which introduces its own overhead, and the result will be inaccurate.

    Read the article

  • A better python property decorator

    - by leChuck
    I've inherited some Python code that contains a rather cryptic decorator. This decorator sets properties in classes all over the project. The problem is that I have traced my debugging problems to this decorator. It seems to "fubar" all debuggers I've tried, and trying to speed up the code with Psyco breaks everything (it seems Psyco and this decorator don't play nicely). I think it would be best to change it.

        def Property(function):
            """Allow readable properties"""
            keys = 'fget', 'fset', 'fdel'
            func_locals = {'doc': function.__doc__}
            def probeFunc(frame, event, arg):
                if event == 'return':
                    locals = frame.f_locals
                    func_locals.update(dict((k, locals.get(k)) for k in keys))
                    sys.settrace(None)
                return probeFunc
            sys.settrace(probeFunc)
            function()
            return property(**func_locals)

    Used like so:

        class A(object):
            @Property
            def prop():
                def fget(self):
                    return self.__prop
                def fset(self, value):
                    self.__prop = value
                # ... etc.

    The errors I get say the problems are because of sys.settrace (perhaps this is an abuse of settrace?). My question: is the same decorator achievable without sys.settrace? If not, I'm in for some heavy rewrites.
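    One possible settrace-free variant, shown only as a sketch: it assumes each decorated function is changed to end with "return locals()", which is a different usage contract from the original.

        def Property(function):
            # collect fget/fset/fdel from the locals the decorated function returns
            keys = ('fget', 'fset', 'fdel')
            func_locals = {'doc': function.__doc__}
            func_locals.update((k, v) for k, v in function().items() if k in keys)
            return property(**func_locals)

        class A(object):
            @Property
            def prop():
                def fget(self):
                    return self.__prop
                def fset(self, value):
                    self.__prop = value
                return locals()   # the extra line this variant requires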

    Read the article

  • How to stop ejabberd from using mnesia

    - by Eldad Mor
    I'm trying to establish a procedure for restoring my database from a crashed server to a new server. My server is running ejabberd as an XMPP server, and I configured it to use PostgreSQL instead of Mnesia - or so I thought. My procedure goes something like "dump the contents of the original DB, run the new server, restore the contents of the DBs using psql, then run the system". However, when I try running ejabberd again I get a crash:

        =CRASH REPORT==== 3-Dec-2010::22:05:00 ===
          crasher:
            pid: <0.36.0>
            registered_name: []
            exception exit: {bad_return,{{ejabberd_app,start,[normal,[]]},
                            {'EXIT',"Error reading Mnesia database"}}}
            in function application_master:init/4

    Here I was thinking that my system is running on PostgreSQL, while it seems I was still using Mnesia. I have several questions: How can I make sure Mnesia is not being used? How can I divert all the ejabberd activities to PGSQL? This is the modules part of my ejabberd.cfg file:

        {modules, [
            {mod_adhoc, []},
            {mod_announce, [{access, announce}]}, % requires mod_adhoc
            {mod_caps, []},
            {mod_configure, []}, % requires mod_adhoc
            {mod_ctlextra, []},
            {mod_disco, []},
            {mod_irc, []},
            {mod_last_odbc, []},
            {mod_muc, [
                {access, muc},
                {access_create, muc},
                {access_persistent, muc},
                {access_admin, muc_admin},
                {max_users, 500}
            ]},
            {mod_offline_odbc, []},
            {mod_privacy_odbc, []},
            {mod_private_odbc, []},
            {mod_pubsub, [ % requires mod_caps
                {access_createnode, pubsub_createnode},
                {plugins, ["default", "pep"]}
            ]},
            {mod_register, [
                {welcome_message, none},
                {access, register}
            ]},
            {mod_roster_odbc, []},
            {mod_stats, []},
            {mod_time, []},
            {mod_vcard_odbc, []},
            {mod_version, []}
        ]}.

    What am I missing? I am assuming the crash is due to the Mnesia DB being used by ejabberd, and since it's out of sync with the PostgreSQL DB, it cannot operate correctly - but maybe I'm totally off track here, and would love some direction. EDIT: One problem solved. Since I'm using the Amazon cloud, I needed to hardcode the ERLANG_NODE so it won't be defined by the hostname (which changes on reboot). This got my ejabberd running, but I still wish to stop using Mnesia, and I wonder what part of ejabberd is still using it and how I can find it.

    Read the article

  • iphone reload tableView

    - by Brodie4598
    In the following code, I have determined that everything works up until [tableView reloadData]. I have NSLog calls set up in the table view delegate methods and none of them is getting called. I have other methods doing the same reloadData and it works perfectly. The only difference that I am aware of is that this is in a @catch block. Maybe you smart guys can see something I'm doing wrong...

        @catch (NSException *e) { //// chart is user generated
            logoView.image = nil;
            NSInteger row = [picker selectedRowInComponent:0];
            NSString *selectedAircraft = [aircraft objectAtIndex:row];
            NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
            NSString *docsPath = [paths objectAtIndex:0];
            NSString *checklistPath = [[NSString alloc] initWithFormat:@"%@/%@.checklist", docsPath, selectedAircraft];
            NSString *dataString = [NSString stringWithContentsOfFile:checklistPath encoding:NSUTF8StringEncoding error:NULL];
            if ([dataString hasPrefix:@"\n"]) {
                dataString = [dataString substringFromIndex:1];
            }
            NSArray *tempArray = [dataString componentsSeparatedByString:@"\n"];
            NSDictionary *temporaryDictionary = [NSDictionary dictionaryWithObject:tempArray forKey:@"User Generated Checklist"];
            self.names = temporaryDictionary;
            NSArray *tempArray2 = [NSArray arrayWithObject:@"01User Generated Checklist"];
            self.keys = tempArray2;
            aircraftLabel.text = selectedAircraft;
            checklistSelectPanel.hidden = YES;
            [tableView reloadData];
        }

    Read the article

  • How to recalculate all-pairs shortest paths on-line if nodes are getting removed?

    - by Pavel Shved
    The latest news about the underground bombing made me curious about the following problem. Assume we have a weighted undirected graph, nodes of which are sometimes removed. The problem is to re-calculate the shortest paths between all pairs of nodes quickly after such removals. With a simple modification of the Floyd-Warshall algorithm we can calculate the shortest paths between all pairs. These paths may be stored in a table, where shortest[i][j] contains the index of the next node on the shortest path between i and j (or a NULL value if there's no path). The algorithm requires O(n³) time to build the table, and each query shortest(i,j) takes O(1). Unfortunately, we would have to re-run this algorithm after each removal. The other alternative is to run a graph search on each query. This way each removal takes zero time to update an auxiliary structure (because there is none), but each query takes O(E) time. What algorithm can be used to "balance" the query and update time for the all-pairs shortest-paths problem when nodes of the graph are being removed?
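    For reference, a small sketch (plain Python, dictionary-based, not tuned for speed) of the modified Floyd-Warshall table described above, storing a next-hop entry per pair so paths can be read back without re-searching:

        def all_pairs_shortest(weights):
            # weights: {(u, v): w}, the edges of an undirected, weighted graph
            nodes = sorted({n for edge in weights for n in edge})
            INF = float('inf')
            dist = {(u, v): (0 if u == v else INF) for u in nodes for v in nodes}
            nxt = {(u, v): None for u in nodes for v in nodes}
            for (u, v), w in weights.items():
                for a, b in ((u, v), (v, u)):          # undirected: add both directions
                    if w < dist[(a, b)]:
                        dist[(a, b)] = w
                        nxt[(a, b)] = b
            for k in nodes:                            # O(n^3) relaxation
                for i in nodes:
                    for j in nodes:
                        if dist[(i, k)] + dist[(k, j)] < dist[(i, j)]:
                            dist[(i, j)] = dist[(i, k)] + dist[(k, j)]
                            nxt[(i, j)] = nxt[(i, k)]
            return dist, nxt

        def shortest(nxt, i, j):
            # walk the next-hop table, mirroring the shortest[i][j] lookup in the question
            if i == j:
                return [i]
            if nxt[(i, j)] is None:
                return None
            path = [i]
            while i != j:
                i = nxt[(i, j)]
                path.append(i)
            return path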

    Read the article

  • what's an effective way to build a csproj file in C#?

    - by jcollum
    I'd like to avoid a command line for this. I've been using the MSBuild API (Microsoft.Build.Framework and Microsoft.Build.BuildEngine) with code that looks like this:

        this.buildEngine = new Engine();
        BuildPropertyGroup props = new BuildPropertyGroup();
        props.SetProperty("Configuration", "Debug");
        this.buildEngine.RegisterLogger(this.logger);
        Project proj = new Project(this.buildEngine);
        proj.LoadXml(this.projectFileAndPath, ProjectLoadSettings.None);
        this.buildEngine.BuildProject(proj, "Build");

    However, I've run into enough problems that I can't find answers for that I'm really wondering if I'm doing this right. First, I can't find the output (there's no bin directory in any of the places where I figured the DLLs would end up). Second, I tried building a project that I had made in VS2008 and the proj.LoadXml(...) line fails with an invalid XML encoding error. But of course the XML file is valid, since VS2008 can build it (I checked). At this point I'm beginning to wonder if I've picked up some code that's way out of date or a methodology that's been superseded by something else. Opinions?

    Read the article

  • Understanding CSRF - Simple Question

    - by byronh
    I know this might make me seem like an idiot, I've read everything there is to read about CSRF and I still don't understand how using a 'challenge token' would add any sort of prevention. Please help me clarify the basic concept, none of the articles and posts here on SO I read seemed to really explicitly state what value you're comparing with what. From OWASP: In general, developers need only generate this token once for the current session. After initial generation of this token, the value is stored in the session and is utilized for each subsequent request until the session expires. If I understand the process correctly, this is what happens. I log in at http://example.com and a session/cookie is created containing this random token. Then, every form includes a hidden input also containing this random value from the session which is compared with the session/cookie upon form submission. But what does that accomplish? Aren't you just taking session data, putting it in the page, and then comparing it with the exact same session data? Seems like circular reasoning. These articles keep talking about following the "same-origin policy" but that makes no sense, because all CSRF attacks ARE of the same origin as the user, just tricking the user into doing actions he/she didn't intend. Is there any alternative other than appending the token to every single URL as a query string? Seems very ugly and impractical, and makes bookmarking harder for the user.
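    To make the flow concrete, a small sketch (plain Python; session is assumed to be a server-side, dict-like session object) of the generate-once / compare-on-submit cycle. The comparison is not circular because the hidden-field value travels with the rendered form, and a third-party page can trigger a request in the victim's browser but cannot read the victim's session or that hidden field, so it cannot supply a matching value:

        import hmac
        import secrets

        def issue_csrf_token(session):
            # generated once per session, stored server-side, and echoed into
            # every form as a hidden <input> field
            if 'csrf_token' not in session:
                session['csrf_token'] = secrets.token_hex(16)
            return session['csrf_token']

        def csrf_token_is_valid(session, submitted):
            # on submission, compare the hidden-field value against the stored one
            expected = session.get('csrf_token', '')
            return bool(expected) and hmac.compare_digest(expected, submitted or '')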

    Read the article

  • RSA example that does not use NoPadding

    - by Tom Brito
    Where can I find an RSA encryption example that does not use "NoPadding"? Update - better: how do I make this code run correctly without throwing the "too much data for RSA block" exception?

        import java.math.BigInteger;
        import java.security.KeyFactory;
        import java.security.interfaces.RSAPrivateKey;
        import java.security.interfaces.RSAPublicKey;
        import java.security.spec.RSAPrivateKeySpec;
        import java.security.spec.RSAPublicKeySpec;
        import javax.crypto.Cipher;

        /**
         * Basic RSA example.
         */
        public class TestRSA {

            public static void main(String[] args) throws Exception {
                byte[] input = new byte[100];
                Cipher cipher = Cipher.getInstance("RSA/None/NoPadding", "BC");
                KeyFactory keyFactory = KeyFactory.getInstance("RSA", "BC");

                // create the keys
                RSAPublicKeySpec pubKeySpec = new RSAPublicKeySpec(
                        new BigInteger("d46f473a2d746537de2056ae3092c451", 16),
                        new BigInteger("11", 16));
                RSAPrivateKeySpec privKeySpec = new RSAPrivateKeySpec(
                        new BigInteger("d46f473a2d746537de2056ae3092c451", 16),
                        new BigInteger("57791d5430d593164082036ad8b29fb1", 16));
                RSAPublicKey pubKey = (RSAPublicKey) keyFactory.generatePublic(pubKeySpec);
                RSAPrivateKey privKey = (RSAPrivateKey) keyFactory.generatePrivate(privKeySpec);

                // encryption step
                cipher.init(Cipher.ENCRYPT_MODE, pubKey);
                byte[] cipherText = cipher.doFinal(input);

                // decryption step
                cipher.init(Cipher.DECRYPT_MODE, privKey);
                byte[] plainText = cipher.doFinal(cipherText);
            }
        }

    Read the article

  • CSS buttons and cross-browser issue

    - by Jason
    I have a class 'button' that I want to use for button, input and a tags. The problem is that both button and input tags have a greater line height than the anchor tag does. I have attached an example here so you can play around with it in Firebug or whatever: http://28dev.com/stackoverflow/css-buttons.html. But for those who just want to see the CSS/HTML, here it is:

        .button {
            font-family : helvetica, arial, sans-serif;
            -moz-border-radius: 3px;
            -webkit-border-radius: 3px;
            background : url(/images/ui-bg_glass_75_e6e6e6_1x400.png) repeat-x scroll 50% 50% #E6E6E6;
            border-color : #636363 #888888 #888888 #636363;
            border-right : 1px solid #888888;
            border-style : solid;
            border-width : 1px;
            cursor : pointer;
            display : block;
            float : left;
            font-size : 12px;
            font-weight : normal;
            line-height : 100%;
            min-width : 100px;
            padding : 3px 12px;
            text-align : center;
        }
        .button, .button a {
            color : #282828;
            text-decoration : none;
        }
        .button:hover, .button:hover a {
            border-color : #343434;
            color : #080808;
        }
        .marginR {
            margin-right : 5px;
        }

        <button class='button marginR' type='submit'>button.button</button>
        <input type="submit" value="input.button" class="button marginR" />
        <a class="button" href="">a.button</a>

    Read the article

  • How do you localize/internationalize an MVC Controller when using a SQL based localization provider?

    - by EBarr
    Hopefully this isn't too silly of a question. In MVC there appears to be plenty of localization support in the views. Once I get to the controller, however, it becomes murky. Using meta:resourcekey="blah" is out, and the same goes for <%$ Resources:PageTitle.Text %>. "ASP.NET MVC - Localization Helpers" suggested extensions for the Html helper classes like Resource(this Controller controller, string expression, params object[] args). Similarly, "Localize your MVC with ease" suggested a slightly different extension like Localize(this System.Web.UI.UserControl control, string resourceKey, params object[] args). None of these approaches works while in a controller. I put together the below function and I'm using the controller's full class name as my VirtualPath, but I'm new to MVC and assume there's a better way.

        public static string Localize(System.Type theType, string resourceKey, params object[] args)
        {
            string resource = (HttpContext.GetLocalResourceObject(theType.FullName, resourceKey) ?? string.Empty).ToString();
            return mergeTokens(resource, args);
        }

    Thoughts? Comments?

    Read the article

  • Find all cycles in graph, redux

    - by Shadow
    Hi, I know there are already quite a few answers to this question. However, I found none of them really bringing it to the point. Some argue that a cycle is (almost) the same as a strongly connected component (see http://stackoverflow.com/questions/546655/finding-all-cycles-in-graph/549402#549402), so one could use algorithms designed for that goal. Some argue that finding a cycle can be done via DFS and checking for back edges (see the Boost graph documentation on file dependencies). I would now like some suggestions on whether all cycles in a graph can be detected via DFS and checking for back edges. My opinion is that it could indeed work that way, as DFS-VISIT (see the pseudocode of DFS) freshly enters each node that was not yet visited. In that sense, each vertex is a potential start of a cycle. Additionally, as DFS visits each edge once, each edge leading to the starting point of a cycle is also covered. Thus, by using DFS and back-edge checking it should indeed be possible to detect all cycles in a graph. Note that, if cycles with different numbers of participant nodes exist (e.g. triangles, rectangles, etc.), additional work has to be done to discriminate the actual "shape" of each cycle.
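    As a small illustration, a Python sketch of the usual white/grey/black DFS colouring that reports back edges (shown for a directed graph given as an adjacency dict; for an undirected graph the edge back to the DFS parent would have to be excluded first):

        def back_edges(adj):
            # adj: {node: [successors]}; every node is assumed to appear as a key
            WHITE, GREY, BLACK = 0, 1, 2
            colour = {v: WHITE for v in adj}
            found = []

            def visit(u):
                colour[u] = GREY
                for v in adj[u]:
                    if colour[v] == GREY:        # v is an ancestor still on the DFS stack
                        found.append((u, v))
                    elif colour[v] == WHITE:
                        visit(v)
                colour[u] = BLACK

            for v in adj:
                if colour[v] == WHITE:
                    visit(v)
            return found

        # each reported back edge closes at least one cycle
        print(back_edges({'a': ['b'], 'b': ['c'], 'c': ['a', 'd'], 'd': []}))   # [('c', 'a')]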

    Read the article

  • .NET Sockets Buffer Overflow No Error

    - by Michael Covelli
    I have one thread that is receiving data over a socket like this:

        while (sock.Connected)
        {
            // Receive Data (Block if no data)
            recvn = sock.Receive(recvb, 0, rlen, SocketFlags.None, out serr);
            if (recvn <= 0 || sock == null || !sock.Connected)
            {
                OnError("Error In Receive, recvn <= 0 || sock == null || !sock.Connected");
                return;
            }
            else if (serr != SocketError.Success)
            {
                OnError("Error In Receive, serr = " + serr);
                return;
            }

            // Copy Data Into Tokenizer
            tknz.Read(recvb, recvn);

            // Parse Data
            while (tknz.MoveToNext())
            {
                try
                {
                    ParseMessageAndRaiseEvents(tknz.Buffer(), tknz.Length);
                }
                catch (System.Exception ex)
                {
                    string BadMessage = ByteArrayToStringClean(tknz.Buffer(), tknz.Length);
                    string msg = string.Format("Exception in MDWrapper Parsing Message, Ex = {0}, Msg = {1}",
                                               ex.Message, BadMessage);
                    OnError(msg);
                }
            }
        }

    And I kept seeing occasional errors in my parsing function indicating that the message wasn't valid. At first, I thought that my tokenizer class was broken. But after logging all the incoming bytes to the tokenizer, it turns out that the raw bytes in recvb weren't a valid message. I didn't think that corrupted data like this was possible with a TCP data stream. I figured it had to be some type of buffer overflow, so I set sock.ReceiveBufferSize = 1024 * 1024 * 8; and the parsing error never, ever occurs in testing (it happens often enough to replicate if I don't change the ReceiveBufferSize). But my question is: why wasn't I seeing an exception or an error state or something if the socket's internal buffer was overflowing before I changed this buffer size?

    Read the article

  • Timer A usage in MSP430 in high compiler optimization mode

    - by Vishal
    Hi, I have used Timer A in the MSP430 with high compiler optimization, but found that my timer code fails when high compiler optimization is used. When no optimization is used, the code works fine. This code is used to achieve a 1 ms timer tick; timeOutCNT is incremented in the interrupt. Following is the code:

        // Disable interrupt and clear CCR0
        TIMER_A_TACTL = TIMER_A_TASSEL |       // set the clock source as SMCLK
                        TIMER_A_ID |           // set the divider to 8
                        TACLR |                // clear the timer
                        MC_1;                  // continuous mode
        TIMER_A_TACTL &= ~TIMER_A_TAIE;        // timer interrupt disabled
        TIMER_A_TACTL &= 0;                    // timer interrupt flag disabled
        CCTL0 = CCIE;                          // CCR0 interrupt enabled
        CCR0 = 500;
        TIMER_A_TACTL &= TIMER_A_TAIE;         // enable timer interrupt
        TIMER_A_TACTL &= TIMER_A_TAIFG;        // enable timer interrupt
        TACTL = TIMER_A_TASSEL + MC_1 + ID_3;  // SMCLK, upmode

        timeOutCNT = 0;                        // timeOutCNT is increased in the timer interrupt
        while (timeOutCNT <= 1);               // delay of 1 millisecond

        TIMER_A_TACTL = TIMER_A_TASSEL |       // set the clock source as SMCLK
                        TIMER_A_ID |           // set the divider to 8
                        TACLR |                // clear the timer
                        MC_1;                  // continuous mode
        TIMER_A_TACTL &= ~TIMER_A_TAIE;        // timer interrupt disabled
        TIMER_A_TACTL &= 0x00;                 // timer interrupt flag disabled

    Can anybody help me resolve this issue? Is there any other way to use Timer A so that it works fine in optimization modes, or have I used it wrongly to achieve a 1 ms interrupt? Thanks in advance. Vishal N

    Read the article
