Search Results

Search found 908 results on 37 pages for 'cascading deletes'.

Page 18/37 | < Previous Page | 14 15 16 17 18 19 20 21 22 23 24 25  | Next Page >

  • Keeping folder of files in sync over 3 machines

    - by Wizzard
    Morning. I've got 3 machines that have user content on them, which I need to keep in sync. This is a 3-way sync. Currently I run rsync, but we just don't handle deletes. I have looked at something like Gluster, but that seems a little over the top. Is there any other software out there to do a 3-way sync, or a good network file system...? These are web servers, so we don't want a slow / IO-hungry process. 3 servers... user content could be added to one and needs to be moved to the other two.
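
    One workaround, sketched below, assumes you can treat one machine as the authoritative copy and push to the other two on a schedule (hostnames and paths are placeholders, not from the question); for true multi-master sync with delete handling, tools such as unison or csync2 are worth a look.

        #!/usr/bin/env python
        # push_content.py - run from cron on the authoritative server.
        # rsync's --delete flag propagates deletions to the replicas.
        import subprocess

        SOURCE = "/var/www/user_content/"           # hypothetical content path
        REPLICAS = ["web2:/var/www/user_content/",  # hypothetical replica hosts
                    "web3:/var/www/user_content/"]

        for target in REPLICAS:
            # -a preserves times/permissions; --delete removes files gone from SOURCE
            subprocess.check_call(["rsync", "-a", "--delete", SOURCE, target])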

    Read the article

  • Restricting server.log file size (minecraft) in CentOS

    - by MisdartedPenguin
    I'm currently running a Bukkit (Minecraft) server which generates a server.log file with all the console messages / errors. Every now and then a plugin (which I need) crashes and can cause the server.log file size to increase dramatically. I've had it hit 32GB before, which used all my disk space. Is there a way to make it a rolling log (deletes old entries) or to limit the file size so it can't go above, say, 10MB? The solution needs to not affect how the server runs, so it doesn't throw an error when it can't write anymore. Is there any way of doing this with CentOS?
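
    CentOS's stock logrotate can usually handle this (a size-based rule with copytruncate, so the running server keeps its open file handle). Purely as an illustration of the idea, here is a rough cron-driven sketch; the path is a placeholder and the in-place rewrite assumes the server opens the log in append mode, so test it before relying on it.

        #!/usr/bin/env python
        # trim_log.py - keep server.log under ~10 MB; run every few minutes from cron.
        import os

        LOG = "/home/minecraft/server.log"   # hypothetical path to the Bukkit log
        LIMIT = 10 * 1024 * 1024             # 10 MB

        if os.path.exists(LOG) and os.path.getsize(LOG) > LIMIT:
            with open(LOG, "rb") as f:
                f.seek(-(LIMIT // 2), os.SEEK_END)   # keep roughly the newest 5 MB
                tail = f.read()
            with open(LOG, "wb") as f:               # rewrite in place with just the tail
                f.write(tail)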

    Read the article

  • How do I sync a subset of tables between two databases on the same mysql database server

    - by Mike
    I would like to be able to sync a subset of tables between two MySQL databases that are running on the same server. One of the databases acts as the master, where inserts, updates and deletes can be made. The second database uses those same tables for read-only operations. I do not want to use federated tables to achieve this. The long-term goal is to separate the 2 databases onto multiple servers; the second database that has the subset of tables as read-only may also be replicated a few times over, to distribute geographically for load and performance purposes, each with unique data... Once that is achieved, I plan to use the binlog to replicate those specific tables on the secondary databases. In the meantime, I'd like to keep these tables in sync. Is there a more elegant way to do this other than using a cron job and mysqldump?
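
    For what it's worth, here is what the cron + mysqldump route can look like as a single script (a sketch only; schema names, table names and credentials are placeholders). Percona Toolkit's pt-table-sync is another option for keeping two schemas converged without full re-dumps.

        #!/usr/bin/env python
        # sync_tables.py - dump a fixed set of tables from the master schema
        # and load them into the read-only schema on the same server.
        import subprocess

        TABLES = ["customers", "orders"]            # hypothetical table names
        SRC_DB, DST_DB = "app_master", "app_read"   # hypothetical schema names
        AUTH = ["-ureplicator", "-psecret"]         # hypothetical credentials

        dump = subprocess.Popen(
            ["mysqldump", "--single-transaction"] + AUTH + [SRC_DB] + TABLES,
            stdout=subprocess.PIPE)
        subprocess.check_call(["mysql"] + AUTH + [DST_DB], stdin=dump.stdout)
        dump.stdout.close()
        dump.wait()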

    Read the article

  • Google App Engine - Secure Cookies

    - by tponthieux
    I'd been searching for a way to do cookie based authentication/sessions in Google App Engine because I don't like the idea of memcache based sessions, and I also don't like the idea of forcing users to create google accounts just to use a website. I stumbled across someone's posting that mentioned some signed cookie functions from the Tornado framework and it looks like what I need. What I have in mind is storing a user's id in a tamper proof cookie, and maybe using a decorator for the request handlers to test the authentication status of the user, and as a side benefit the user id will be available to the request handler for datastore work and such. The concept would be similar to forms authentication in ASP.NET. This code comes from the web.py module of the Tornado framework. According to the docstrings, it "Signs and timestamps a cookie so it cannot be forged" and "Returns the given signed cookie if it validates, or None." I've tried to use it in an App Engine Project, but I don't understand the nuances of trying to get these methods to work in the context of the request handler. Can someone show me the right way to do this without losing the functionality that the FriendFeed developers put into it? The set_secure_cookie, and get_secure_cookie portions are the most important, but it would be nice to be able to use the other methods as well. #!/usr/bin/env python import Cookie import base64 import time import hashlib import hmac import datetime import re import calendar import email.utils import logging def _utf8(s): if isinstance(s, unicode): return s.encode("utf-8") assert isinstance(s, str) return s def _unicode(s): if isinstance(s, str): try: return s.decode("utf-8") except UnicodeDecodeError: raise HTTPError(400, "Non-utf8 argument") assert isinstance(s, unicode) return s def _time_independent_equals(a, b): if len(a) != len(b): return False result = 0 for x, y in zip(a, b): result |= ord(x) ^ ord(y) return result == 0 def cookies(self): """A dictionary of Cookie.Morsel objects.""" if not hasattr(self,"_cookies"): self._cookies = Cookie.BaseCookie() if "Cookie" in self.request.headers: try: self._cookies.load(self.request.headers["Cookie"]) except: self.clear_all_cookies() return self._cookies def _cookie_signature(self,*parts): self.require_setting("cookie_secret","secure cookies") hash = hmac.new(self.application.settings["cookie_secret"], digestmod=hashlib.sha1) for part in parts:hash.update(part) return hash.hexdigest() def get_cookie(self,name,default=None): """Gets the value of the cookie with the given name,else default.""" if name in self.cookies: return self.cookies[name].value return default def set_cookie(self,name,value,domain=None,expires=None,path="/", expires_days=None): """Sets the given cookie name/value with the given options.""" name = _utf8(name) value = _utf8(value) if re.search(r"[\x00-\x20]",name + value): # Don't let us accidentally inject bad stuff raise ValueError("Invalid cookie %r:%r" % (name,value)) if not hasattr(self,"_new_cookies"): self._new_cookies = [] new_cookie = Cookie.BaseCookie() self._new_cookies.append(new_cookie) new_cookie[name] = value if domain: new_cookie[name]["domain"] = domain if expires_days is not None and not expires: expires = datetime.datetime.utcnow() + datetime.timedelta( days=expires_days) if expires: timestamp = calendar.timegm(expires.utctimetuple()) new_cookie[name]["expires"] = email.utils.formatdate( timestamp,localtime=False,usegmt=True) if path: new_cookie[name]["path"] = path def 
clear_cookie(self,name,path="/",domain=None): """Deletes the cookie with the given name.""" expires = datetime.datetime.utcnow() - datetime.timedelta(days=365) self.set_cookie(name,value="",path=path,expires=expires, domain=domain) def clear_all_cookies(self): """Deletes all the cookies the user sent with this request.""" for name in self.cookies.iterkeys(): self.clear_cookie(name) def set_secure_cookie(self,name,value,expires_days=30,**kwargs): """Signs and timestamps a cookie so it cannot be forged""" timestamp = str(int(time.time())) value = base64.b64encode(value) signature = self._cookie_signature(name,value,timestamp) value = "|".join([value,timestamp,signature]) self.set_cookie(name,value,expires_days=expires_days,**kwargs) def get_secure_cookie(self,name,include_name=True,value=None): """Returns the given signed cookie if it validates,or None""" if value is None:value = self.get_cookie(name) if not value:return None parts = value.split("|") if len(parts) != 3:return None if include_name: signature = self._cookie_signature(name,parts[0],parts[1]) else: signature = self._cookie_signature(parts[0],parts[1]) if not _time_independent_equals(parts[2],signature): logging.warning("Invalid cookie signature %r",value) return None timestamp = int(parts[1]) if timestamp < time.time() - 31 * 86400: logging.warning("Expired cookie %r",value) return None try: return base64.b64decode(parts[0]) except: return None uid=1234|1234567890|d32b9e9c67274fa062e2599fd659cc14 Parts: 1. uid is the name of the key 2. 1234 is your value in clear 3. 1234567890 is the timestamp 4. d32b9e9c67274fa062e2599fd659cc14 is the signature made from the value and the timestamp
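
    A rough sketch of how the two key functions can be folded into a google.appengine.ext.webapp handler (Python 2 era, matching the snippet above). COOKIE_SECRET and the handler names are mine, not from the original; in real code, keep the constant-time comparison and the full cookie attributes from the snippet.

        import base64, hashlib, hmac, time
        from google.appengine.ext import webapp

        COOKIE_SECRET = "replace-with-a-long-random-string"  # hypothetical app setting

        class BaseHandler(webapp.RequestHandler):
            def _cookie_signature(self, *parts):
                sig = hmac.new(COOKIE_SECRET, digestmod=hashlib.sha1)
                for part in parts:
                    sig.update(part)
                return sig.hexdigest()

            def set_secure_cookie(self, name, value):
                timestamp = str(int(time.time()))
                value = base64.b64encode(value)
                signed = "|".join([value, timestamp,
                                   self._cookie_signature(name, value, timestamp)])
                self.response.headers.add_header(
                    "Set-Cookie", "%s=%s; path=/" % (name, signed))

            def get_secure_cookie(self, name):
                value = self.request.cookies.get(name)
                if not value:
                    return None
                parts = value.split("|")
                if len(parts) != 3:
                    return None
                if parts[2] != self._cookie_signature(name, parts[0], parts[1]):
                    return None               # signature mismatch: forged or tampered
                if int(parts[1]) < time.time() - 30 * 86400:
                    return None               # older than 30 days
                return base64.b64decode(parts[0])

        class ProfileHandler(BaseHandler):
            def get(self):
                uid = self.get_secure_cookie("uid")   # set earlier by your own login handler
                if uid is None:
                    self.redirect("/login")
                    return
                self.response.out.write("Hello, user %s" % uid)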

    Read the article

  • maven scm plugin deleting output folder in every execution

    - by Udo Fholl
    Hi all, I need to download from 2 different SVN locations to the same output directory, so I configured 2 different executions. But every time a checkout executes, it deletes the output directory, so it also deletes the already downloaded projects. Here is a sample of my pom.xml:

        <profiles>
          <profile>
            <id>checkout</id>
            <activation>
              <property>
                <name>checkout</name>
                <value>true</value>
              </property>
            </activation>
            <build>
              <plugins>
                <plugin>
                  <groupId>org.apache.maven.plugins</groupId>
                  <artifactId>maven-scm-plugin</artifactId>
                  <version>1.3</version>
                  <configuration>
                    <username>${svn.username}</username>
                    <password>${svn.pass}</password>
                    <checkoutDirectory>${path}</checkoutDirectory>
                    <skipCheckoutIfExists />
                  </configuration>
                  <executions>
                    <execution>
                      <id>checkout_a</id>
                      <configuration>
                        <connectionUrl>scm:svn:https://host_n/folder</connectionUrl>
                        <checkoutDirectory>${path}</checkoutDirectory>
                      </configuration>
                      <phase>process-resources</phase>
                      <goals>
                        <goal>checkout</goal>
                      </goals>
                    </execution>
                    <execution>
                      <id>checkout_b</id>
                      <configuration>
                        <connectionUrl>scm:svn:https://host_l/anotherfolder</connectionUrl>
                        <checkoutDirectory>${path}</checkoutDirectory>
                      </configuration>
                      <phase>process-resources</phase>
                      <goals>
                        <goal>checkout</goal>
                      </goals>
                    </execution>
                  </executions>
                </plugin>
              </plugins>
            </build>
          </profile>

    Is there any way to prevent the executions from deleting the folder ${path}? Thank you. PS: I can't format the pom.xml fragment correctly, sorry!

    Read the article

  • Create attribute in XML

    - by user560411
    Hello. I have the following php code that adds data into XML and works correctly. However, in my second step I will create a form that deletes some of the elements. The problem is that I want to add an ID number and then the PHP file will search for it and delete entire node. My question is how can i add an ID into CD for this to work ? For example ( <cd id="xxxx"> ) Also, any ideas or examples of a code that deletes CD having the ID would be appreciate. insert.php ( my index file with the form ) <h1>Playlist</h1> <form action="insert2.php" method="post"> <fieldset> <label for="TITLE">TITLE:</label><input type="text" id="title" name="title" /><br /> <label for="title">BAND:</label> <input type="text" id="band" name="band"/><br /> <label for="path">YEAR:</label> <input type="text" id="year" name="year" /> <br /> <input type="submit" /> </fieldset> </form> <h2>Current entries:</h2> <p>TITLE - BAND - YEAR</p> <?php $doc = new DOMDocument(); $doc->load( 'insert.xml' ); $CATEGORIES = $doc->getElementsByTagName( "CD" ); foreach( $CATEGORIES as $CD ) { $TITLES = $CD->getElementsByTagName( "TITLE" ); $TITLE = $TITLES->item(0)->nodeValue; $BANDS= $CD->getElementsByTagName( "BAND" ); $BAND= $BANDS->item(0)->nodeValue; $YEARS = $CD->getElementsByTagName( "YEAR" ); $YEAR = $YEARS->item(0)->nodeValue; echo "<b>$TITLE - $BAND - $YEAR\n</b><br>"; } ?> inser2.php ( the main code ) <?php $CD = array( 'TITLE' => $_POST['title'], 'BAND' => $_POST['band'], 'YEAR' => $_POST['year'], ); $doc = new DOMDocument(); $doc->load( 'insert.xml' ); $doc->formatOutput = true; $r = $doc->getElementsByTagName("CATEGORIES")->item(0); $b = $doc->createElement("CD"); $TITLE = $doc->createElement("TITLE"); $TITLE->appendChild( $doc->createTextNode( $CD["TITLE"] ) ); $b->appendChild( $TITLE ); $BAND = $doc->createElement("BAND"); $BAND->appendChild( $doc->createTextNode( $CD["BAND"] ) ); $b->appendChild( $BAND ); $YEAR = $doc->createElement("YEAR"); $YEAR->appendChild( $doc->createTextNode( $CD["YEAR"] ) ); $b->appendChild( $YEAR ); $r->appendChild( $b ); $doc->save("insert.xml"); ?> the XML file <?xml version="1.0" encoding="utf-8"?> <MY_CD> <CATEGORIES> <CD> <TITLE>NEVER MIND THE BOLLOCKS</TITLE> <BAND>SEX PISTOLS</BAND> <YEAR>1977</YEAR> </CD> <CD> <TITLE>NEVERMIND</TITLE> <BAND>NIRVANA</BAND> <YEAR>1991</YEAR> </CD> </CATEGORIES> </MY_CD>

    Read the article

  • Android Sqlite - obtaining the correct database row id

    - by Dan_Dan_Man
    I'm working on an app that allows the user to create notes while rehearsing a play. The user can view the notes they have created in a listview, and edit and delete them if they wish. Take for example the user creates 3 notes. In the database, the row_id's will be 1, 2 and 3. So when the user views the notes in the listview, they will also be in the order 1, 2, 3 (intially 0, 1, 2 before I increment the values). So the user can view and delete the correct row from the database. The problem arises when the user decides to delete a note. Say the user deletes the note in position 2. Thus our database will have row_id's 1 and 3. But in the listview, they will be in the position 1 and 2. So if the user clicks on the note in position 2 in the listview it should return the row in the database with row_id 3. However it tries to look for the row_id 2 which doesn't exist, and hence crashes. I need to know how to obtain the corresponding row_id, given the user's selection in the listview. Here is the code below that does this: // When the user selects "Delete" in context menu public boolean onContextItemSelected(MenuItem item) { AdapterContextMenuInfo info = (AdapterContextMenuInfo) item .getMenuInfo(); switch (item.getItemId()) { case DELETE_ID: deleteNote(info.id + 1); return true; } return super.onContextItemSelected(item); } // This method actually deletes the selected note private void deleteNote(long id) { Log.d(TAG, "Deleting row: " + id); mNDbAdapter.deleteNote(id); mCursor = mNDbAdapter.fetchAllNotes(); startManagingCursor(mCursor); fillData(); // TODO: Update play database if there are no notes left for a line. } // When the user clicks on an item, display the selected note protected void onListItemClick(ListView l, View v, int position, long id) { super.onListItemClick(l, v, position, id); viewNote(id, "", "", true); } // This is where we display the note in a custom alert dialog. I've ommited // the rest of the code in this method because the problem lies in this line: // "mCursor = mNDbAdapter.fetchNote(newId);" // I need to replace "newId" with the row_id in the database. private void viewNote(long id, String defaultTitle, String defaultNote, boolean fresh) { final int lineNumber; String title; String note; id++; final long newId = id; Log.d(TAG, "Returning row: " + newId); mCursor = mNDbAdapter.fetchNote(newId); lineNumber = (mCursor.getInt(mCursor.getColumnIndex("number"))); title = (mCursor.getString(mCursor.getColumnIndex("title"))); note = (mCursor.getString(mCursor.getColumnIndex("note"))); . . . } Let me know if you would like me to show anymore code. It seems like something so simple but I just can't find a solution. Thanks!

    Read the article

  • How to stop Excel 2003 from loading a Gazillion files

    - by Gary M. Mugford
    One of my soon-to-be-ex-friends got an Excel file from another friend of his and decided to click on it. It started opening all kinds of files from within Excel. Over 200 and still counting when he called me. I told him to go to task manager, which showed a LOT of files in the applications tab, but only one Excel.exe in the processes tab. Closing it down there, closed down Excel. I then CrossLooped in to see if I could give him a helping hand. Each time Excel was re-opened, the mass influx of files started. They were all kinds of files, PDFs, Docs, JPGs, even some spreadsheets. It looked like the end of solitaire, with multiple windows opening (XP) and the counter on the lone Excel button on the task bar counting off the files. I did the task manager exit routine and went looking for temp files. I CrapCleaned out the system. Made sure I went through the files created in the last hour and deleted anything with a temp anywhere in it. I also deleted the crappy infected/corrupted file from it's place on the desktop (yeah, I know, I yelled for 15 minutes on THAT subject). Despite a delousing, the restart of Excel, which complained of a deactivated add-in, would start the cascading windows, whether I answered yes or no to that question. Yes, it knew it had a serious crash, but why would it just keep on trying to load the bad file, even when I got rid of it? But here's the real question. WHERE was it loading from? I went through the backup folder and NOTHING was there! So what's the process for starting Excel WITHOUT it trying to do a crash recovery? Sort of makes me feel stupid at times. Thanks for any light you can shed on this issue. GM

    Read the article

  • 10 GigE interfaces limits single connection throughput to 1 Gb on a ProCurve 4208vl

    - by wazoox
    The setup is as follows: 3 Linux servers with Intel CX4 10 GigE controllers and an X-Serve with a Myricom 10 GigE CX4 controller are connected to a ProCurve 4208vl switch, along with a myriad of other machines connected through good ol' 1000BASE-T. The interfaces are actually set up as 10 Gig, according to both the switch monitoring interface and the servers (ethtool, etc). However, a single connection between two 10 GigE equipped machines through the switch is limited to exactly 1 Gb. If I connect two of the 10 GigE machines directly with a CX4 cable, netperf reports the link bandwidth as 9000 Mb/s, and NFS achieves about 550 MB/s transfers. But when I'm using the switch, the connection tops out at 950 Mb/s through netperf and 110 MB/s with NFS. When I open several connections from 3 of the machines to the 4th, I get 350 MB/s of NFS transfer speed. So each individual 10 GigE port can actually reach much more than 1 Gb, but individual connections are strictly limited to 1 Gb. Conclusion: the 10 GigE connection through the switch behaves exactly like a trunk of ten 1 Gb connections. That doesn't make any sense to me, unless HP planned these ports only for cascading switches or strictly for many-clients-to-single-server connections. Unfortunately that is NOT the envisioned setup; we need big throughput from machine to machine. Is this a not-so-well-known (or carefully hidden...) limitation of this type of switch? Should I suggest seppuku to the HP representative? Does anyone have any idea on how to enable proper behaviour? I upgraded at a hefty price from bonded 1 Gb links to 10 GigE and see exactly ZERO gain! That's absolutely unacceptable.

    Read the article

  • Steps to debugging web site latency and timeout issues

    - by Paperjam
    I have a client who has multiple offices around the country, all of which share the same Internet connection via their WAN. One specific office for this client is experiencing severe latency and timeout issues with my web site. Most, but not all, of the latency occurs on a specific ASPX page where multiple postbacks are made while populating cascading dropdown lists (rapid form submits). The latency is sporadic and can be anywhere from a few seconds to a full timeout. There is no indication that the timeouts are occurring on the server's end. The IT guy for this client is having trouble narrowing down the problem. Since it is affecting only one location for one client, I am led to believe it is not something with my site but something specific to that location. He's measured ping times while using the site and has noticed no real variance in ping times even when the page has timed out. I believe this may be being caused by some sort of Internet filter that doesn't like rapid form submits, but beyond a hunch I haven't a clue. My question is what things should I tell the IT guy to look for? While I'm not trying to provide active tech support for this issue, I would at least like to glean an understanding of what is going on and try to offer some sort of advice. Thank you.
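
    Since ping looks clean, one concrete test the affected office can run is an HTTP-level probe that repeatedly posts to the problem page and logs how long each request takes; if a filter or proxy is throttling rapid form submits, the failures should cluster after bursts. A hedged sketch (the URL and form fields are placeholders):

        #!/usr/bin/env python
        # probe.py - run from the affected office; logs response time per request.
        import time
        import urllib
        import urllib2

        URL = "http://example.com/page.aspx"          # placeholder for the real page
        FORM = urllib.urlencode({"field": "value"})   # placeholder postback payload

        for i in range(50):
            start = time.time()
            try:
                urllib2.urlopen(URL, FORM, timeout=30).read()
                print("%3d  ok      %6.2fs" % (i, time.time() - start))
            except Exception as e:
                print("%3d  FAILED  %6.2fs  %s" % (i, time.time() - start, e))
            time.sleep(1)   # tighten this to mimic the rapid submits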

    Read the article

  • Firefox url / link to a group of saved bookmarks?

    - by This_Is_Fun
    In Firefox you can easily save a group of tabs together. When (re-)accessing this group, the 'cascading' bookmark menu shows each individual bookmark (and under a line) it says "open all in tabs" I'm looking for a way to launch those tabs without going up through the bookmark menu. Possible options: A) Record a simple macro w/ any number of "superuser" utilities* ('A' is not the preferred option, since many 'little-macros' are hard to keep track of) b) Use Autohotkey (similar to option 'A' and more flexible once you learn the basics) c) How does Firefox load all those tabs? The info must be stored somewhere (as a type of URL??) Quick Summary: The moment I click on "open all in tabs", I am clicking on something very similar to a hyper-link. How do I find the content (exact code) of that 'hyper-link', and / or "How do I easily launch the tabs?" .. . New EDIT #1: I'm looking for a way to launch those tabs without going up through the bookmark menu, or cluttering the bookmarks toolbar which I hide anyway :o) .. . New EDIT #2: I tried to keep the question simple and not mentioning Autohotkey programming. The objective is to launch all tabs using a button on an AHK gui. When grawity said, "It's just an ordinary folder containing ordinary bookmarks," he (she) reminds me I can easily find the folder / Now how to launch to urls inside that folder? .. FYI: (Basic-level) AHK works like this: ; Open one folder ButtonWinMerge_Files: Run, C:\Program Files\WinMerge\ Return .. ; Use default web browser for one link ButtonGoogle: Run, http://google.com Return .. . Question still open: The moment I click on "open all in tabs", I am clicking on something very similar to a hyper-link. "How to 'replicate' the way Firefox launches the tabs with one click?"

    Read the article

  • Unexpected behaviour in a Lotus Notes programmable table

    - by Mark B
    I'm designing a workflow database in Lotus Notes 6.0.3 (soon upgrading to 8.5), and my OS is Windows XP. I have recently tried converting a tabbed table into a programmable one. This was so that I could control which tab was displayed to the user when it was opened, so that they were presented with the most appropriate one for that document's progress through the workflow. That part of it works! One of the tabs features a radio button that controls visibility of the next tab, and a pair of cascading dialogue boxes. One contains the static list "Person":"Team", and the other has a formula based on the first: view:=@If(PeerReview = "Team"; "GroupNames"; "GroupMembers"); @Unique(@DbColumn(""; ""; view; 1)) The dialogue boxes have the property "Refresh fields on keyword change" selected. The behaviour that I wasn't expecting is this. If the radio button is set to "Yes" and a value is selected in one of the dialogue boxes, the table opens the next tab. If the radio button is set to "No" and a value is selected in one of the dialogue boxes, the entire table is hidden. I can duplicate the latter by switching off the "Refresh fields on keyword change" property on the dialogue boxes and instead pressing F9 after selecting a value. I have no idea why the former occurs, though. The table is called "RFCInfo", and I have a field on the form called "$RFCInfo" which is editable, hidden from all users who aren't me and initially set by a Postopen script, which I can post if necessary - it's essentially a Select Case statement that looks at a particular item value and returns the name of the table row relating to that value. Can anyone offer any pointers?

    Read the article

  • ASPNET WebAPI REST Guidance

    - by JoshReuben
    ASP.NET Web API is an ideal platform for building RESTful applications on the .NET Framework. While I may be more partial to NodeJS these days, there is no denying that WebAPI is a well engineered framework. What follows is my investigation of how to leverage WebAPI to construct a RESTful frontend API.   The Advantages of REST Methodology over SOAP Simpler API for CRUD ops Standardize Development methodology - consistent and intuitive Standards based à client interop Wide industry adoption, Ease of use à easy to add new devs Avoid service method signature blowout Smaller payloads than SOAP Stateless à no session data means multi-tenant scalability Cache-ability Testability   General RESTful API Design Overview · utilize HTTP Protocol - Usage of HTTP methods for CRUD, standard HTTP response codes, common HTTP headers and Mime Types · Resources are mapped to URLs, actions are mapped to verbs and the rest goes in the headers. · keep the API semantic, resource-centric – A RESTful, resource-oriented service exposes a URI for every piece of data the client might want to operate on. A REST-RPC Hybrid exposes a URI for every operation the client might perform: one URI to fetch a piece of data, a different URI to delete that same data. utilize Uri to specify CRUD op, version, language, output format: http://api.MyApp.com/{ver}/{lang}/{resource_type}/{resource_id}.{output_format}?{key&filters} · entity CRUD operations are matched to HTTP methods: · Create - POST / PUT · Read – GET - cacheable · Update – PUT · Delete - DELETE · Use Uris to represent a hierarchies - Resources in RESTful URLs are often chained · Statelessness allows for idempotency – apply an op multiple times without changing the result. POST is non-idempotent, the rest are idempotent (if DELETE flags records instead of deleting them). · Cache indication - Leverage HTTP headers to label cacheable content and indicate the permitted duration of cache · PUT vs POST - The client uses PUT when it determines which URI (Id key) the new resource should have. The client uses POST when the server determines they key. PUT takes a second param – the id. POST creates a new resource. The server assigns the URI for the new object and returns this URI as part of the response message. Note: The PUT method replaces the entire entity. That is, the client is expected to send a complete representation of the updated product. If you want to support partial updates, the PATCH method is preferred DELETE deletes a resource at a specified URI – typically takes an id param · Leverage Common HTTP Response Codes in response headers 200 OK: Success 201 Created - Used on POST request when creating a new resource. 304 Not Modified: no new data to return. 400 Bad Request: Invalid Request. 401 Unauthorized: Authentication. 403 Forbidden: Authorization 404 Not Found – entity does not exist. 406 Not Acceptable – bad params. 409 Conflict - For POST / PUT requests if the resource already exists. 500 Internal Server Error 503 Service Unavailable · Leverage uncommon HTTP Verbs to reduce payload sizes HEAD - retrieves just the resource meta-information. OPTIONS returns the actions supported for the specified resource. PATCH - partial modification of a resource. · When using PUT, POST or PATCH, send the data as a document in the body of the request. Don't use query parameters to alter state. · Utilize Headers for content negotiation, caching, authorization, throttling o Content Negotiation – choose representation (e.g. JSON or XML and version), language & compression. 
Signal via RequestHeader.Accept & ResponseHeader.Content-Type Accept: application/json;version=1.0 Accept-Language: en-US Accept-Charset: UTF-8 Accept-Encoding: gzip o Caching - ResponseHeader: Expires (absolute expiry time) or Cache-Control (relative expiry time) o Authorization - basic HTTP authentication uses the RequestHeader.Authorization to specify a base64 encoded string "username:password". can be used in combination with SSL/TLS (HTTPS) and leverage OAuth2 3rd party token-claims authorization. Authorization: Basic sQJlaTp5ZWFslylnaNZ= o Rate Limiting - Not currently part of HTTP so specify non-standard headers prefixed with X- in the ResponseHeader. X-RateLimit-Limit: 10000 X-RateLimit-Remaining: 9990 · HATEOAS Methodology - Hypermedia As The Engine Of Application State – leverage API as a state machine where resources are states and the transitions between states are links between resources and are included in their representation (hypermedia) – get API metadata signatures from the response Link header - in a truly REST based architecture any URL, except the initial URL, can be changed, even to other servers, without worrying about the client. · error responses - Do not just send back a 200 OK with every response. Response should consist of HTTP error status code (JQuery has automated support for this), A human readable message , A Link to a meaningful state transition , & the original data payload that was problematic. · the URIs will typically map to a server-side controller and a method name specified by the type of request method. Stuff all your calls into just four methods is not as crazy as it sounds. · Scoping - Path variables look like you’re traversing a hierarchy, and query variables look like you’re passing arguments into an algorithm · Mapping URIs to Controllers - have one controller for each resource is not a rule – can consolidate - route requests to the appropriate controller and action method · Keep URls Consistent - Sometimes it’s tempting to just shorten our URIs. not recommend this as this can cause confusion · Join Naming – for m-m entity relations there may be multiple hierarchy traversal paths · Routing – useful level of indirection for versioning, server backend mocking in development ASPNET WebAPI Considerations ASPNET WebAPI implements a lot (but not all) RESTful API design considerations as part of its infrastructure and via its coding convention. Overview When developing an API there are basically three main steps: 1. Plan out your URIs 2. Setup return values and response codes for your URIs 3. Implement a framework for your API.   Design · Leverage Models MVC folder · Repositories – support IoC for tests, abstraction · Create DTO classes – a level of indirection decouples & allows swap out · Self links can be generated using the UrlHelper · Use IQueryable to support projections across the wire · Models can support restful navigation properties – ICollection<T> · async mechanism for long running ops - return a response with a ticket – the client can then poll or be pushed the final result later. · Design for testability - Test using HttpClient , JQuery ( $.getJSON , $.each) , fiddler, browser debug. Leverage IDependencyResolver – IoC wrapper for mocking · Easy debugging - IE F12 developer tools: Network tab, Request Headers tab     Routing · HTTP request method is matched to the method name. (This rule applies only to GET, POST, PUT, and DELETE requests.) · {id}, if present, is matched to a method parameter named id. 
· Query parameters are matched to parameter names when possible · Done in config via Routes.MapHttpRoute – similar to MVC routing · Can alternatively: o decorate controller action methods with HttpDelete, HttpGet, HttpHead,HttpOptions, HttpPatch, HttpPost, or HttpPut., + the ActionAttribute o use AcceptVerbsAttribute to support other HTTP verbs: e.g. PATCH, HEAD o use NonActionAttribute to prevent a method from getting invoked as an action · route table Uris can support placeholders (via curly braces{}) – these can support default values and constraints, and optional values · The framework selects the first route in the route table that matches the URI. Response customization · Response code: By default, the Web API framework sets the response status code to 200 (OK). But according to the HTTP/1.1 protocol, when a POST request results in the creation of a resource, the server should reply with status 201 (Created). Non Get methods should return HttpResponseMessage · Location: When the server creates a resource, it should include the URI of the new resource in the Location header of the response. public HttpResponseMessage PostProduct(Product item) {     item = repository.Add(item);     var response = Request.CreateResponse<Product>(HttpStatusCode.Created, item);     string uri = Url.Link("DefaultApi", new { id = item.Id });     response.Headers.Location = new Uri(uri);     return response; } Validation · Decorate Models / DTOs with System.ComponentModel.DataAnnotations properties RequiredAttribute, RangeAttribute. · Check payloads using ModelState.IsValid · Under posting – leave out values in JSON payload à JSON formatter assigns a default value. Use with RequiredAttribute · Over-posting - if model has RO properties à use DTO instead of model · Can hook into pipeline by deriving from ActionFilterAttribute & overriding OnActionExecuting Config · Done in App_Start folder > WebApiConfig.cs – static Register method: HttpConfiguration param: The HttpConfiguration object contains the following members. Member Description DependencyResolver Enables dependency injection for controllers. Filters Action filters – e.g. exception filters. Formatters Media-type formatters. by default contains JsonFormatter, XmlFormatter IncludeErrorDetailPolicy Specifies whether the server should include error details, such as exception messages and stack traces, in HTTP response messages. Initializer A function that performs final initialization of the HttpConfiguration. MessageHandlers HTTP message handlers - plug into pipeline ParameterBindingRules A collection of rules for binding parameters on controller actions. Properties A generic property bag. Routes The collection of routes. Services The collection of services. · Configure JsonFormatter for circular references to support links: PreserveReferencesHandling.Objects Documentation generation · create a help page for a web API, by using the ApiExplorer class. · The ApiExplorer class provides descriptive information about the APIs exposed by a web API as an ApiDescription collection · create the help page as an MVC view public ILookup<string, ApiDescription> GetApis()         {             return _explorer.ApiDescriptions.ToLookup(                 api => api.ActionDescriptor.ControllerDescriptor.ControllerName); · provide documentation for your APIs by implementing the IDocumentationProvider interface. Documentation strings can come from any source that you like – e.g. 
extract XML comments or define custom attributes to apply to the controller [ApiDoc("Gets a product by ID.")] [ApiParameterDoc("id", "The ID of the product.")] public HttpResponseMessage Get(int id) · GlobalConfiguration.Configuration.Services – add the documentation Provider · To hide an API from the ApiExplorer, add the ApiExplorerSettingsAttribute Plugging into the Message Handler pipeline · Plug into request / response pipeline – derive from DelegatingHandler and override theSendAsync method – e.g. for logging error codes, adding a custom response header · Can be applied globally or to a specific route Exception Handling · Throw HttpResponseException on method failures – specify HttpStatusCode enum value – examine this enum, as its values map well to typical op problems · Exception filters – derive from ExceptionFilterAttribute & override OnException. Apply on Controller or action methods, or add to global HttpConfiguration.Filters collection · HttpError object provides a consistent way to return error information in the HttpResponseException response body. · For model validation, you can pass the model state to CreateErrorResponse, to include the validation errors in the response public HttpResponseMessage PostProduct(Product item) {     if (!ModelState.IsValid)     {         return Request.CreateErrorResponse(HttpStatusCode.BadRequest, ModelState); Cookie Management · Cookie header in request and Set-Cookie headers in a response - Collection of CookieState objects · Specify Expiry, max-age resp.Headers.AddCookies(new CookieHeaderValue[] { cookie }); Internet Media Types, formatters and serialization · Defaults to application/json · Request Accept header and response Content-Type header · determines how Web API serializes and deserializes the HTTP message body. There is built-in support for XML, JSON, and form-urlencoded data · customizable formatters can be inserted into the pipeline · POCO serialization is opt out via JsonIgnoreAttribute, or use DataMemberAttribute for optin · JSON serializer leverages NewtonSoft Json.NET · loosely structured JSON objects are serialzed as JObject which derives from Dynamic · to handle circular references in json: json.SerializerSettings.PreserveReferencesHandling =    PreserveReferencesHandling.All à {"$ref":"1"}. · To preserve object references in XML [DataContract(IsReference=true)] · Content negotiation Accept: Which media types are acceptable for the response, such as “application/json,” “application/xml,” or a custom media type such as "application/vnd.example+xml" Accept-Charset: Which character sets are acceptable, such as UTF-8 or ISO 8859-1. Accept-Encoding: Which content encodings are acceptable, such as gzip. Accept-Language: The preferred natural language, such as “en-us”. o Web API uses the Accept and Accept-Charset headers. (At this time, there is no built-in support for Accept-Encoding or Accept-Language.) 
· Controller methods can take JSON representations of DTOs as params – auto-deserialization · Typical JQuery GET request: function find() {     var id = $('#prodId').val();     $.getJSON("api/products/" + id,         function (data) {             var str = data.Name + ': $' + data.Price;             $('#product').text(str);         })     .fail(         function (jqXHR, textStatus, err) {             $('#product').text('Error: ' + err);         }); }            · Typical GET response: HTTP/1.1 200 OK Server: ASP.NET Development Server/10.0.0.0 Date: Mon, 18 Jun 2012 04:30:33 GMT X-AspNet-Version: 4.0.30319 Cache-Control: no-cache Pragma: no-cache Expires: -1 Content-Type: application/json; charset=utf-8 Content-Length: 175 Connection: Close [{"Id":1,"Name":"TomatoSoup","Price":1.39,"ActualCost":0.99},{"Id":2,"Name":"Hammer", "Price":16.99,"ActualCost":10.00},{"Id":3,"Name":"Yo yo","Price":6.99,"ActualCost": 2.05}] True OData support · Leverage Query Options $filter, $orderby, $top and $skip to shape the results of controller actions annotated with the [Queryable]attribute. [Queryable]  public IQueryable<Supplier> GetSuppliers()  · Query: ~/Suppliers?$filter=Name eq ‘Microsoft’ · Applies the following selection filter on the server: GetSuppliers().Where(s => s.Name == “Microsoft”)  · Will pass the result to the formatter. · true support for the OData format is still limited - no support for creates, updates, deletes, $metadata and code generation etc · vnext: ability to configure how EditLinks, SelfLinks and Ids are generated Self Hosting no dependency on ASPNET or IIS: using (var server = new HttpSelfHostServer(config)) {     server.OpenAsync().Wait(); Tracing · tracability tools, metrics – e.g. send to nagios · use your choice of tracing/logging library, whether that is ETW,NLog, log4net, or simply System.Diagnostics.Trace. · To collect traces, implement the ITraceWriter interface public class SimpleTracer : ITraceWriter {     public void Trace(HttpRequestMessage request, string category, TraceLevel level,         Action<TraceRecord> traceAction)     {         TraceRecord rec = new TraceRecord(request, category, level);         traceAction(rec);         WriteTrace(rec); · register the service with config · programmatically trace – has helper extension methods: Configuration.Services.GetTraceWriter().Info( · Performance tracing - pipeline writes traces at the beginning and end of an operation - TraceRecord class includes aTimeStamp property, Kind property set to TraceKind.Begin / End Security · Roles class methods: RoleExists, AddUserToRole · WebSecurity class methods: UserExists, .CreateUserAndAccount · Request.IsAuthenticated · Leverage HTTP 401 (Unauthorized) response · [AuthorizeAttribute(Roles="Administrator")] – can be applied to Controller or its action methods · See section in WebApi document on "Claim-based-security for ASP.NET Web APIs using DotNetOpenAuth" – adapt this to STS.--> Web API Host exposes secured Web APIs which can only be accessed by presenting a valid token issued by the trusted issuer. 
http://zamd.net/2012/05/04/claim-based-security-for-asp-net-web-apis-using-dotnetopenauth/ · Use MVC membership provider infrastructure and add a DelegatingHandler child class to the WebAPI pipeline - http://stackoverflow.com/questions/11535075/asp-net-mvc-4-web-api-authentication-with-membership-provider - this will perform the login actions · Then use AuthorizeAttribute on controllers and methods for role mapping- http://sixgun.wordpress.com/2012/02/29/asp-net-web-api-basic-authentication/ · Alternate option here is to rely on MVC App : http://forums.asp.net/t/1831767.aspx/1

    Read the article

  • Implementing Audit Trail- Spring AOP vs.Hibernate Interceptor vs DB Trigger

    - by RN
    I found a couple of discussion threads on this, but nothing that brought a comparison of all three mechanisms under one thread. So here is my question... I need to audit DB changes - inserts/updates/deletes to business objects. I can think of three ways to do this: 1) DB triggers, 2) Hibernate interceptors, 3) Spring AOP. (This question is specific to a Spring/Hibernate/RDBMS stack. I guess this is neutral to Java/C# or Hibernate/NHibernate, but if your answer depends on C++ or Java or a specific implementation of Hibernate, please specify.) What are the pros and cons of selecting one of these strategies? I am not asking for implementation details - this is a design discussion. I am hoping we can make this a part of the community wiki.

    Read the article

  • Custom app_offline.htm file during publish

    - by Charlino
    When I publish my ASP.NET MVC application it generates an app_offline.htm file to take the site offline while it updates the website, and then deletes the file once the publish is successful. This is cool and I really like the idea, but I want to create my own custom app_offline.htm file that the publish action is aware of, and put it somewhere where it doesn't affect my development site - i.e. it doesn't sit in the root of my development site rendering it offline all the time. TIA, Charles EDIT: From the comments on Scott Gu's post about app_offline.htm, it seems that customization of the app_offline.htm file wasn't possible with VS 2005 - has this changed with VS 2008 and now VS 2010?

    Read the article

  • django modelformset - one form per related table row

    - by Toby
    Hello, I have two models:

        class Model1():
            name = CharField()
            url = CharField()

        class Model2():
            model1 = ForeignKey(Model1)
            user = ForeignKey(User)
            zzz = CharField()

    There are 5 rows for Model1 in the database; these are fixed and will rarely change. I need to display a formset for Model2 that allows users to enter the zzz value. The formset must always show one form per row in the Model1 table, and the label for each form in the formset must be the name of the related Model1. If the user deletes a Model2 in the formset, the next time the page loads it will render an empty zzz value for that form, and the user must be able to edit the previous zzz value - meaning it must be pre-populated with all Model2 rows associated with the user. The idea is to print each row in the Model1 table as a form, instead of the user selecting the related Model1 name in a select box. I know it's not that complicated, but I'm seriously stumped and keep going round in circles!! Many thanks in advance. Similar to http://stackoverflow.com/questions/298779/form-or-formset-to-handle-multiple-table-rows-in-django
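
    A hedged sketch of one way to do this with a plain (non-model) formset keyed on Model1; the form, view and template names below are mine, not from the question. Each form carries the Model1 id in a hidden field, zzz is pre-filled from the user's existing Model2 rows, and a blank zzz on submit deletes the row.

        from django import forms
        from django.forms.formsets import formset_factory
        from django.shortcuts import render

        class ZzzForm(forms.Form):
            model1_id = forms.IntegerField(widget=forms.HiddenInput())
            zzz = forms.CharField(required=False)

        ZzzFormSet = formset_factory(ZzzForm, extra=0)

        def zzz_view(request):
            model1_rows = list(Model1.objects.all())      # the 5 fixed rows
            existing = dict(Model2.objects.filter(user=request.user)
                                          .values_list('model1_id', 'zzz'))
            initial = [{'model1_id': m.pk, 'zzz': existing.get(m.pk, '')}
                       for m in model1_rows]
            formset = ZzzFormSet(request.POST or None, initial=initial)

            if request.method == 'POST' and formset.is_valid():
                for form in formset:
                    m1_id = form.cleaned_data['model1_id']
                    zzz = form.cleaned_data['zzz']
                    if zzz:
                        obj, _ = Model2.objects.get_or_create(
                            user=request.user, model1_id=m1_id,
                            defaults={'zzz': zzz})
                        if obj.zzz != zzz:
                            obj.zzz = zzz
                            obj.save()
                    else:
                        Model2.objects.filter(user=request.user,
                                              model1_id=m1_id).delete()

            # Pair each Model1 row with its form so the template can label the
            # form with model1.name (remember to render formset.management_form).
            rows = list(zip(model1_rows, formset.forms))
            return render(request, 'zzz_formset.html',
                          {'formset': formset, 'rows': rows})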

    Read the article

  • Python script repeated auto start up.

    - by Ali
    I am designing a Python web app where people can have an email sent to them on a particular day. A user puts in his email and a date in a form, and it gets stored in my database. My script would then search through the database looking for all records with today's date, retrieve the emails, send them out and delete the entries from the table. Is it possible to have a setup where the script starts up automatically at a given time, say 1 pm every day, sends out the email and then quits? If I have a continuously running script, I might go over the CPU limit of my shared web hosting. Or is the effect negligible? Ali
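
    On most shared hosts this is exactly what a cron job is for (the control panel usually exposes one), so the script only runs once a day and exits. A hedged sketch of what the cron-invoked script might look like; the database, table and mail settings are placeholders:

        #!/usr/bin/env python
        # send_reminders.py - crontab entry (1 pm daily):
        #   0 13 * * * /usr/bin/python /home/user/send_reminders.py
        import datetime
        import smtplib
        import sqlite3                      # stand-in; use your real DB driver
        from email.mime.text import MIMEText

        conn = sqlite3.connect("/home/user/reminders.db")   # placeholder database
        cur = conn.cursor()
        today = datetime.date.today().isoformat()

        cur.execute("SELECT id, email FROM reminders WHERE send_on = ?", (today,))
        server = smtplib.SMTP("localhost")
        for row_id, address in cur.fetchall():
            msg = MIMEText("This is your reminder for today.")
            msg["Subject"] = "Your reminder"
            msg["From"] = "noreply@example.com"
            msg["To"] = address
            server.sendmail(msg["From"], [address], msg.as_string())
            conn.execute("DELETE FROM reminders WHERE id = ?", (row_id,))
        server.quit()
        conn.commit()
        conn.close()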

    Read the article

  • Changelog file: YAML vs JSON vs CSV

    - by Aziz Light
    Hello everybody. I am creating a simple changelog lib in CodeIgniter that will basically log a message every time someone adds, deletes, changes or publishes a blog post. I will log messages in files in batches of 300, so every 301st message will go in a new file. At first I wanted to write the logs to simple .log files, but then I got the idea to actually style the thing, and I had to separate each "attribute" of each message (i.e. the user, the message, the type of the log, etc.). So plain .log files are out of the question, since extracting the info would be a pain. What is the most appropriate format for such a task? I already ruled out MySQL and XML because they are too heavy (especially considering that the log files won't exceed about 300 lines). I suggested YAML vs JSON vs CSV in the title, but is there a better alternative?

    Read the article

  • Modifying ChangeSet in RIA Services

    - by Mohit
    Hi, I am using RIA Services Beta 2 with Linq2Sql and SL3. In my SL3 app, I have a datagrid where I can do some mappings of data (updates, inserts and deletes). I override the Submit method, which runs when SubmitChanges() is called. In the Submit method in the domain service, I do some validation. If validation fails for a particular ChangeSetEntry in the ChangeSet, a validation error is added to its ValidationErrors. Then I call base.Submit(changeSet). So, if the changeset has 3 entities and one of the entities results in a validation error, the other 2 entities are also rolled back. It looks like RIA Services does an implicit transaction, and hence it either submits all 3 or none, even if 2 out of 3 do not have any validation errors. Is there a way for the RIA service to prevent rollback of the valid entities and only invalidate the ones that failed validation? Inputs will be appreciated. Thanks, Mohit

    Read the article

  • Windows batch file to delete .svn files and folders

    - by Marco Demaio
    Hi, in order to delete all ".svn" files/folders/subfolders in "myfolder" I use this simple line in a batch file: FOR /R myfolder %%X IN (.svn) DO (RD /S /Q "%%X") This works, but if there are no ".svn" files/folders the batch file shows a warning saying: "The system cannot find the file specified." This warning is very noisy, so I was wondering how to make it skip the RD command if it doesn't find any ".svn" files/folders. Usually wildcards would suffice, but in this case I don't know how to use them, because I don't want to delete files/folders with a .svn extension; I want to delete the files/folders named exactly ".svn". So if I do this: FOR /R myfolder %%X IN (*.svn) DO (RD /S /Q "%%X") it would NOT delete files/folders named exactly ".svn" anymore. I also tried this: FOR /R myfolder %%X IN (.sv*) DO (RD /S /Q "%%X") but it doesn't work either; it deletes nothing. Thanks for any help!
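
    Two commonly suggested tweaks to the batch line are to guard the delete with IF EXIST "%%X" or simply to silence the message by appending 2>nul to the RD command. If a batch-only answer isn't a hard requirement, the same job is a few lines of Python (a sketch; run it against the folder you want cleaned):

        # clean_svn.py - recursively remove every directory named exactly ".svn"
        import os
        import shutil

        ROOT = r"myfolder"   # same folder as in the batch file

        for dirpath, dirnames, filenames in os.walk(ROOT, topdown=True):
            if ".svn" in dirnames:
                shutil.rmtree(os.path.join(dirpath, ".svn"))
                dirnames.remove(".svn")   # don't descend into the removed tree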

    Read the article

  • Netbeans not Deploying/Debugging

    - by Triztian
    Hello all, I have this problem, and it's the second time it has happened to me. I'm using the NetBeans IDE 6.9.1; for a while it was working fine, but then it stopped deploying my webapp when debugging or running the application. I don't recall messing with the Ant build scripts or any strange configurations. It successfully builds the dist folder and the *.war file, and I am able to deploy it manually, but when I run it through the IDE it deletes the webapp folder webapps/myapp, so I cannot debug it or run it; all I get is a ClassNotFoundException for my servlets. I am unsure of what extra information would help to solve this issue. It's pretty annoying; I don't know how it got solved the last time, it just "went away". Any help is truly appreciated.

    Read the article

  • Proper way of deleting records with Codeigniter

    - by luckytaxi
    I came across another Stack Overflow post regarding GET vs POST and it made me think. With CI, my URL for deleting a record is http://domain.com/item/delete/100, which deletes record id 100 from my DB. The record_id is pulled via $this->uri->segment. In my model I do have a where clause that checks that the user is indeed the owner of that record. A user_id is stored in a session inside the DB. Is that good enough? My understanding is that POST should be used for one-time modification of data and GET is for retrieving records (e.g. viewing an item or a permalink).

    Read the article

  • Axapta: Is it possible to move AOT nodes programatically?

    - by axapter
    Is it possible to move an AOT node in Axapta through code? (I want to achieve the same movement as done via Alt-Up / Alt-Down.) Dynamics AX 2009 has the AOTmove method, but when I try

        #AOT
        ProjectNode root;
        //SysContextMenuAOT ctx = new SysContextMenuAOT();
        ProjectGroupNode firstChild;
        ProjectGroupNode secondChild;
        ;
        //root=ctx.first();
        root = infolog.projectRootNode().AOTfindChild("Private").AOTfindChild("TestProject");
        root = root.getRunNode();
        firstChild = root.AOTfirstChild();
        secondChild = firstChild.AOTnextSibling();
        secondChild = firstChild.AOTnextSibling();
        secondChild.AOTMove(secondChild.AOTparent());

    and then call it on the whole project, it successfully moves the secondChild node, BUT it deletes every subnode inside of secondChild.

    Read the article

  • SharePoint itemDeleting event is not working

    - by Clodin
    Hi! I have a site in SharePoint and I want to do a custom delete from a list, so I'm creating the following:

        public class ListItemEventReceiver : SPItemEventReceiver
        {
            public override void ItemDeleting(SPItemEventProperties properties)
            {
                if (properties.ListTitle.Equals("Projects List"))
                {
                    Projects pr = new Projects();
                    string projectName = properties.ListItem["Project Name"].ToString();
                    pr.DeleteProject(projectName);
                }
            }
        }

    The 'Projects' class has a 'DeleteProject' method that deletes the item. But it's doing nothing :( I should mention that everything is OK in Feature.xml. Where am I wrong?

    Read the article

  • MySQL Triggers - How to capture external web variables? (eg. web username, ip)

    - by Ken
    Hi, I'm looking to create an audit trail for my PHP web app's database (eg. capture inserts, updates, deletes). MySQL triggers seem to be just the thing -- but how do I capture the IP address and the web username (as opposed to the mysql username, localhost) of the user who invoked the trigger? Thanks so much. -Ken P.S. I'm working with this example code I found: DROP TRIGGER IF EXISTS history_trigger $$ CREATE TRIGGER history_trigger BEFORE UPDATE ON clients FOR EACH ROW BEGIN IF OLD.first_name != NEW.first_name THEN INSERT INTO history_clients ( client_id , col , value , user_id , edit_time ) VALUES ( NEW.client_id, 'first_name', NEW.first_name, NEW.editor_id, NEW.last_mod ); END IF; IF OLD.last_name != NEW.last_name THEN INSERT INTO history_clients ( client_id , col , value , user_id , edit_time ) VALUES ( NEW.client_id, 'last_name', NEW.last_name, NEW.editor_id, NEW.last_mod ); END IF; END; $$
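
    A trigger only sees what the database session knows, so the usual trick is for the web app to set session variables right after it connects and then reference those inside the trigger. A hedged sketch of the pattern (shown with Python's MySQLdb for brevity; the same two SQL statements work from PHP, and @app_user / @app_ip are made-up variable names):

        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="webapp",
                               passwd="secret", db="crm")     # placeholder credentials
        cur = conn.cursor()

        # Stamp this connection with the *web* identity once per request
        # (values would come from your session and REMOTE_ADDR).
        cur.execute("SET @app_user = %s, @app_ip = %s", ("ken", "203.0.113.7"))

        # The trigger can then use the session variables, e.g.:
        #   INSERT INTO history_clients (client_id, col, value, user_id, edit_time)
        #   VALUES (NEW.client_id, 'first_name', NEW.first_name, @app_user, NOW());
        # They are NULL if the app forgot to set them, which is easy to spot.

        cur.execute("UPDATE clients SET first_name = %s WHERE client_id = %s",
                    ("Kenneth", 42))
        conn.commit()

    Note that session variables are per-connection, so with persistent or pooled connections they need to be set at the start of every request.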

    Read the article
