Search Results

Search found 64049 results on 2562 pages for 'youtube net api'.

Page 243/2562 | < Previous Page | 239 240 241 242 243 244 245 246 247 248 249 250  | Next Page >

  • Tutorial: Getting Started with the NoSQL JavaScript / Node.js API for MySQL Cluster

    - by Mat Keep
    Tutorial authored by Craig Russell and JD Duncan.

    The MySQL Cluster team is working on a new NoSQL JavaScript connector for MySQL. The objectives are simplicity and high performance for JavaScript users:

    - end-to-end JavaScript development, from the browser to the server and now to the world's most popular open source database
    - native "NoSQL" access to the storage layer without going first through SQL transformations and parsing

    Node.js is a complete web platform built around JavaScript designed to deliver millions of client connections on commodity hardware. With the MySQL NoSQL Connector for JavaScript, Node.js users can easily add data access and persistence to their web, cloud, social and mobile applications. While the initial implementation is designed to plug and play with Node.js, the actual implementation doesn't depend heavily on Node, potentially enabling wider platform support in the future.

    Implementation

    The architecture and user interface of this connector are very different from other MySQL connectors in a major way: it is an asynchronous interface that follows the event model built into Node.js. To make it as easy as possible, we decided to use a domain object model to store the data. This allows users to query data from the database and work with fully-instantiated objects, instead of having to deal with rows and columns of the database. The domain object model can have any user behavior that is desired, with the NoSQL connector providing the data from the database. To make it as fast as possible, we use a direct connection from the user's address space to the database. This approach means that no SQL (pun intended) is needed to get to the data, and no SQL server sits between the user and the data. The connector is being developed to be extensible to multiple underlying database technologies, including direct, native access to both the MySQL Cluster "ndb" and InnoDB storage engines. The connector integrates the MySQL Cluster native API library directly within the Node.js platform itself, enabling developers to seamlessly couple their high performance, distributed applications with a high performance, distributed persistence layer delivering 99.999% availability. The following sections take you through how to connect to MySQL, query the data and get started.

    Connecting to the database

    A Session is the main user access path to the database. You can get a Session object directly from the connector using the openSession function:

        var nosql = require("mysql-js");
        var dbProperties = {
            "implementation" : "ndb",
            "database" : "test"
        };
        nosql.openSession(dbProperties, null, onSession);

    The openSession function calls back into the application upon creating a Session. The Session is then used to create, delete, update, and read objects.

    Reading data

    The Session can read data from the database in a number of ways. If you simply want the data from the database, you provide a table name and the key of the row that you want. For example, consider this schema:

        create table employee (
            id int not null primary key,
            name varchar(32),
            salary float
        ) ENGINE=ndbcluster;

    Since the primary key is a number, you can provide the key as a number to the find function.

        var onSession = function(err, session) {
            if (err) {
                console.log(err);
                // ... error handling
            }
            session.find('employee', 0, onData);
        };

        var onData = function(err, data) {
            if (err) {
                console.log(err);
                // ... error handling
            }
            console.log('Found: ', JSON.stringify(data));
            // ... use data in application
        };

    If you want to have the data stored in your own domain model, you tell the connector which table your domain model uses by specifying an annotation, and pass your domain model to the find function.

        var annotations = new nosql.Annotations();
        var Employee = function(id, name, salary) {
            this.id = id;
            this.name = name;
            this.salary = salary;
            this.giveRaise = function(percent) {
                this.salary *= percent;
            };
        };
        annotations.mapClass(Employee, {'table' : 'employee'});

        var onSession = function(err, session) {
            if (err) {
                console.log(err);
                // ... error handling
            }
            session.find(Employee, 0, onData);
        };

    Updating data

    You can update the emp instance in memory, but to make the raise persistent, you need to write it back to the database using the update function.

        var onData = function(err, emp) {
            if (err) {
                console.log(err);
                // ... error handling
            }
            console.log('Found: ', JSON.stringify(emp));
            emp.giveRaise(0.12); // gee, thanks!
            session.update(emp); // oops, session is out of scope here
        };

    Using JavaScript can be tricky because it does not have the concept of block scope for variables. You can create a closure to handle these variables, or use a feature of the connector to remember your variables. The connector API takes a fixed number of parameters and returns a fixed number of result parameters to the callback function, but the connector will keep track of extra variables for you and return them to the callback. So in the above example, change the onSession function to remember the session variable, and you can refer to it in the onData function:

        var onSession = function(err, session) {
            if (err) {
                console.log(err);
                // ... error handling
            }
            session.find(Employee, 0, onData, session);
        };

        var onData = function(err, emp, session) {
            if (err) {
                console.log(err);
                // ... error handling
            }
            console.log('Found: ', JSON.stringify(emp));
            emp.giveRaise(0.12); // gee, thanks!
            session.update(emp, onUpdate); // session is now in scope
        };

        var onUpdate = function(err, emp) {
            if (err) {
                console.log(err);
                // ... error handling
            }
        };

    Inserting data

    Inserting data requires a mapped JavaScript user function (constructor) and a session. Create a variable and persist it:

        var onSession = function(err, session) {
            var data = new Employee(999, 'Mat Keep', 20000000);
            session.persist(data, onInsert);
        };

    Deleting data

    To remove data from the database, use the session remove function. You use an instance of the domain object to identify the row you want to remove; only the key field is relevant.

        var onSession = function(err, session) {
            var key = new Employee(999);
            session.remove(key, onDelete);
        };

    More extensive queries

    We are working on the implementation of more extensive queries along the lines of the criteria query API. Stay tuned.

    How to evaluate

    The MySQL Connector for JavaScript is available for download from labs.mysql.com. Select the build MySQL-Cluster-NoSQL-Connector-for-Node-js. You can also clone the project on GitHub. Since it is still early in development, feedback is especially valuable (so don't hesitate to leave comments on this blog, or head to the MySQL Cluster forum). Try it out and see how easy (and fast) it is to integrate MySQL Cluster into your Node.js platforms. You can learn more about other previewed functionality of MySQL Cluster 7.3 here

    Read the article

  • Error during GENERAL_REQUEST_ENTITY for POST results in ASP .NET session state never getting unlocked

    - by Jesse
    I have been trying to chase down the root cause of a condition where ASP .NET session state remains locked after a web request has been terminated due to an unexpected error. We use the SQL Server session state provider because we have several servers in a web farm.

    This issue first presented itself in the form of many requests getting stuck on the 'AcquireRequestState' event of their lifecycle for no apparent reason. I was able to find corresponding entries for these requests in the session state database in SQL Server that were all locked (column Locked = 1). I was also able to correlate these requests to entries in the IIS log with HTTP status codes of 500 (with a sub status of 0). These findings led me to believe that, in some cases, a request was erroring out but was NOT releasing its lock on session state like it should.

    I enabled Failed Request Tracing in IIS for the website in question for status code 500, with all available providers selected, each with the 'Verbose' setting for verbosity. I've since gathered several failed traces that have caused permanently locked ASP .NET sessions. They all share the same characteristics:

    - They are all 'POST' requests where the browser is posting data to be processed/saved.
    - They all have events indicating that the 'Session' module was invoked during the REQUEST_ACQUIRE_STATE event. At this point the request would have marked the row in the session state database as "locked". This is normal and expected.
    - They all have GENERAL_READ_ENTITY_START, GENERAL_READ_ENTITY_END, and GENERAL_REQUEST_ENTITY entries that appear to be reading in the data that was posted to the server as part of the request. This appears to be a buffered operation, as these events get repeated over and over with each one reading in some subset of the posted data.
    - At some point during the 'read entity' related events an error occurs. Some have the error code "Incorrect function. (0x80070001)" and others have "The I/O operation has been aborted because of either a thread exit or an application request. (0x800703e3)".
    - Once the error has been encountered, they all jump directly to the END_REQUEST events.

    The issue here is that, under normal circumstances, there should be a RELEASE_REQUEST_STATE event that allows the Session module to release the lock it has on the session. This event is being skipped in this scenario. Just to be sure, I enabled Failed Request Tracing for the '200' status code as well and generated several traces of successful requests that do have the RELEASE_REQUEST_STATE event being handled by the Session module.

    My theory at this point is that some kind of network issue is causing the 'Incorrect function' and 'I/O operation has been aborted because of either a thread exit or an application request' errors, but I don't understand why this seems to cause the request handling to skip over the RELEASE_REQUEST_STATE event. If the request went through REQUEST_ACQUIRE_STATE, it seems like it should also hit RELEASE_REQUEST_STATE. I'm loath to say that this is a bug in IIS or ASP .NET, but it certainly appears that way to me at this point. Are there any configuration changes I could make to help ensure that RELEASE_REQUEST_STATE is fired under all error conditions?

    Read the article

  • Windows - CPU power management APIs

    - by iulianchira
    What APIs are provided by Windows for CPU power management? I'm interested in CPU frequency scaling - setting minimum and maximum CPU frequency, similar to what you can do in Control Panel power plans, but programmatically. I'm also interested in .NET APIs.
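
    Not part of the original question, but for orientation: the native power-plan functions live in powrprof.dll (PowerGetActiveScheme, PowerSetActiveScheme, PowerWriteACValueIndex and related functions) and can be P/Invoked from .NET. Below is a minimal C# sketch; the processor-subgroup and "maximum processor state" GUIDs are assumptions recalled from powrprof.h, so verify them against the output of powercfg /q before relying on them.

        using System;
        using System.Runtime.InteropServices;

        class CpuPowerSketch
        {
            // Power-plan functions exported by powrprof.dll.
            [DllImport("powrprof.dll")]
            static extern uint PowerGetActiveScheme(IntPtr userRootPowerKey, out IntPtr activePolicyGuid);

            [DllImport("powrprof.dll")]
            static extern uint PowerSetActiveScheme(IntPtr userRootPowerKey, ref Guid schemeGuid);

            [DllImport("powrprof.dll")]
            static extern uint PowerWriteACValueIndex(IntPtr rootPowerKey, ref Guid schemeGuid,
                ref Guid subGroupGuid, ref Guid settingGuid, uint valueIndex);

            [DllImport("kernel32.dll")]
            static extern IntPtr LocalFree(IntPtr mem);

            // Assumed GUIDs - double-check against powrprof.h or "powercfg /q":
            // processor settings subgroup and the "maximum processor state" setting.
            static Guid processorSubgroup = new Guid("54533251-82be-4824-96c1-47b60b740d00");
            static Guid maxProcessorState = new Guid("bc5038f7-23e0-4960-96da-33abaf5935ec");

            static void Main()
            {
                // Get the GUID of the currently active power scheme.
                IntPtr pActiveGuid;
                if (PowerGetActiveScheme(IntPtr.Zero, out pActiveGuid) != 0)
                    throw new InvalidOperationException("PowerGetActiveScheme failed");

                Guid activeScheme = (Guid)Marshal.PtrToStructure(pActiveGuid, typeof(Guid));
                LocalFree(pActiveGuid);

                // Cap the maximum processor state at 80% while on AC power...
                PowerWriteACValueIndex(IntPtr.Zero, ref activeScheme,
                    ref processorSubgroup, ref maxProcessorState, 80);

                // ...and re-apply the scheme so the new value takes effect.
                PowerSetActiveScheme(IntPtr.Zero, ref activeScheme);
            }
        }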

    Read the article

  • Youtube video processing on iPhone

    - by Chonch
    Hey, in my iPhone app I want to load a video from YouTube, perform some basic image processing on it, and display it to the user. I am using OpenCV to do my image processing, and I know I can use it for grabbing all of the frames from the video as well (cvRetrieveFrame). My only problem is that cvRetrieveFrame expects its source to be of type CvCapture*, which can be created either from a camera source or from a file name. How can I set a YouTube video file as the source for a CvCapture*? Thanks,

    Read the article

  • Google Base Query Problems

    - by Craig
    I am querying Google Base using the .NET library pretty much as described on this page. http://code.google.com/apis/base/docs/2.0/developers_guide_dotnet.html When I run the query a GBaseFeed is returned and it will usually have the TotalRecords property set to something like 35, but in the Entries collection it will often have no items or very few items. Other times the query returns all 35 items as expected in the Entries. Has anyone seen this behaviour or have any idea what could cause it?

    Read the article

  • PHP validate youtube URL

    - by Fero
    Hi all, could anyone kindly tell me how to validate a YouTube URL using PHP? Example: how can we validate this URL: www.youtube.com/watch?v=GeppLPQtihA Thanks in advance.. Fero

    Read the article

  • play youtube video in WebView

    - by Ionic Walrus
    Hi all, in my Android app I have a WebView to display HTML data from our website. Sometimes the page will have YouTube embed objects. These don't show up properly in the app. Is there any way to show/play YouTube videos in a WebView? Thanks.

    Read the article

  • WCF web service: response is 200/ok, but response body is empty

    - by user1021224
    I am creating a WCF Web API service. My problem is that some methods return a 200/OK response, but the headers and the body are empty.

    In setting up my web service, I created an ADO.NET Entity Data Model. I chose the ADO.NET DbContext Generator when I added a code generation item. In the Model.tt document, I changed HashSet and ICollection to List. I built my website. It used to be that when I coded a method to return a List of an entity (like List<Customer> or List<Employee> in the Northwind database), it worked fine. Over time, I could not return a List of any of those, and could only grab one entity. Now it's gotten to the point where I can return a List<string> or List<int>, but not a List or an instance of any entity. When I try to get a List<AnyEntity>, the response is 200/OK, but the response headers and body are empty. I have tried using the debugger and Firefox's Web Console. Using FF's WC, I could only get an "undefined" status code. I am not sure where to go from here.

    EDIT: In trying to grab all Areas from the database, I do this:

        [WebGet(UriTemplate = "areas")]
        public List<a1Areas> AllAreas()
        {
            return context.a1Areas.ToList();
        }

    I would appreciate any more methods for debugging this. Thanks in advance.

    Found the answer, thanks to Merlyn! In my Global.asax file, I forgot to comment out two lines that took care of proxies and disposing of my context object. The code is below:

        void Application_BeginRequest(object sender, EventArgs e)
        {
            var context = new AssignmentEntities();
            context.Configuration.ProxyCreationEnabled = false;
            HttpContext.Current.Items["_context"] = context;
        }

        void Application_EndRequest(object sender, EventArgs e)
        {
            var context = HttpContext.Current.Items["_context"] as AssignmentEntities;
            if (context != null)
            {
                context.Dispose();
            }
        }

    Read the article

  • contenteditable realtime replace youtube url

    - by pimz
    So the problem is: I have a contenteditable div with a keyup handler bound to it. Every time somebody puts a YouTube URL in it, the URL has to be replaced by an embedded movie. I came up with a regex like this:

        content.match(/http:\/\/\w{0,3}.?youtube+\.\w{2,3}\/watch\?v=.*?(?=\s)/g);

    Firefox will do the replace after a whitespace, but in IE it won't work. Any suggestions? Thanks in advance!

    Read the article

  • php:unable to download youtube video using phptube class

    - by I Like PHP
    I'm using the phptube class (from this site) to download YouTube videos. In the code I paste a YouTube URL into an input box, but I get the errors below:

        Warning: file_get_contents(http://www.youtube.com/get_video?video_id=&t=) [function.file-get-contents]: failed to open stream: HTTP request failed! HTTP/1.0 404 Not Found

    and

        Warning: file_put_contents(./flvs/3Hx9VsqMUug.flv) [function.file-put-contents]: failed to open stream: No such file or directory in E:\xampp\htdocs\vdo\utube\functions.php on line 19

    path: ./flvs/3Hx9VsqMUug.flv

    Please tell me where the problem is.

    Read the article

  • How to reserve public API to internal usage in .NET?

    - by mark
    Dear ladies and sirs,

    Let me first present the case, which will explain my question. This is going to be a bit long, so I apologize in advance :-).

    I have objects and collections which should support the Merge API (it is my custom API, the signature of which is immaterial for this question). This API must be internal, meaning only my framework should be allowed to invoke it. However, derived types should be able to override the basic implementation. The natural way to implement this pattern, as I see it, is this:

    - The Merge API is declared as part of some internal interface, let us say IMergeable.
    - Because the interface is internal, derived types would not be able to implement it directly. Rather, they must inherit it from a common base type.
    - So a common base type is introduced, which implements the IMergeable interface explicitly, where the interface methods delegate to respective protected virtual methods providing the default implementation.

    This way the API is only callable by my framework, but derived types may override the default implementation. The following code snippet demonstrates the concept:

        internal interface IMergeable
        {
            void Merge(object obj);
        }

        public class BaseFrameworkObject : IMergeable
        {
            protected virtual void Merge(object obj)
            {
                // The default implementation.
            }

            void IMergeable.Merge(object obj)
            {
                Merge(obj);
            }
        }

        public class SomeThirdPartyObject : BaseFrameworkObject
        {
            protected override void Merge(object obj)
            {
                // A derived type implementation.
            }
        }

    All is fine, provided a single common base type suffices, which is usually true for non-collection types. The thing is that collections must be mergeable as well. Collections do not play nicely with the presented concept, because developers do not develop collections from scratch. There are predefined implementations - observable, filtered, compound, read-only, remove-only, ordered, god-knows-what, ... They may be developed from scratch in-house, but once finished they serve a wide range of products and should never be tailored to some specific product. Which means that either:

    - they do not implement the IMergeable interface at all, because it is internal to some product, or
    - the scope of the IMergeable interface is raised to public and the API becomes open and callable by all.

    Let us refer to these collections as standard collections. Anyway, the first option screws my framework, because now each possible standard collection type has to be paired with a respective framework version, augmenting the standard one with the IMergeable interface implementation - this is so bad I am not even considering it. The second option breaks the framework as well, because the IMergeable interface should be internal for a reason (whatever it is) and now this interface has to be open to all.

    So what to do? My solution is this: make IMergeable a public API, but add an extra parameter to the Merge method - I call it a security token. The interface implementation may check that the token references some internal object which is never exposed to the outside. If this is the case, then the method was called from within the framework; otherwise, some outside API consumer attempted to invoke it and so the implementation can blow up with a SecurityException.

    Here is the modified code snippet demonstrating this concept:

        internal static class InternalApi
        {
            internal static readonly object Token = new object();
        }

        public interface IMergeable
        {
            void Merge(object obj, object token);
        }

        public class BaseFrameworkObject : IMergeable
        {
            protected virtual void Merge(object obj)
            {
                // The default implementation.
            }

            public void Merge(object obj, object token)
            {
                if (!object.ReferenceEquals(token, InternalApi.Token))
                {
                    throw new SecurityException("bla bla bla");
                }
                Merge(obj);
            }
        }

        public class SomeThirdPartyObject : BaseFrameworkObject
        {
            protected override void Merge(object obj)
            {
                // A derived type implementation.
            }
        }

    Of course, this is less explicit than having an internally scoped interface, and the check is moved from compile time to run time, yet this is the best I could come up with. Now, I have a gut feeling that there is a better way to solve the problem I have presented. I do not know, maybe using some standard Code Access Security features? I have only a vague understanding of it, but can the LinkDemand attribute be somehow related to it? Anyway, I would like to hear other opinions. Thanks.
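
    A quick usage sketch of the token pattern above (not from the original post; MergeEngine is a hypothetical framework-internal class, reusing the IMergeable and InternalApi types shown): framework code can supply InternalApi.Token, while outside callers cannot produce it, so their calls fail the reference check.

        using System.Collections.Generic;

        // Inside the framework assembly.
        internal static class MergeEngine
        {
            internal static void MergeAll(IEnumerable<IMergeable> targets, object source)
            {
                foreach (IMergeable target in targets)
                {
                    // Framework code can see InternalApi.Token, so the reference check passes.
                    target.Merge(source, InternalApi.Token);
                }
            }
        }

        // Outside the framework, a consumer can only guess at a token:
        //     someMergeable.Merge(other, new object());   // throws SecurityException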

    Read the article

  • How to create 1280x720 music video with static pic

    - by monov
    I wanna upload a song to youtube, and put a static 1280x720 pic as the video. I'd like the format to be one of the recommended ones. I tried Windows Movie Maker 2.6 but it only generated a 640x480 video. I also tried Windows Live Movie Maker but it put a big black margin around my pic (and, inexplicably, produced a video with a slightly lower volume). Do you know any way to do what I need?

    Read the article

  • Some Flash applications do not work on Ubuntu (9.10)

    - by Itay Moav
    Some Flash sites do not work well on my computer (Ubuntu 9.10). Example: youtube.com - can't hear sounds. http://animesquish.org/anime/queens-blade-heir-to-the-throne-episode-01/ - I see only the first second of each movie and then it freezes. What am I missing? Here is the output of dpkg -l | grep flash:

        ii  flashplugin-installer             10.0.42.34ubuntu0.9.10.1   Adobe Flash Player plugin installer
        ii  flashplugin-nonfree-extrasound    0.0.svn2431-3              Adobe Flash Player platform support library

    Read the article

  • video uploading software

    - by Pennf0lio
    Is there software that lets you upload videos to video hosting sites (YouTube, Google Videos, Megavideo, etc.), with features like scheduled uploads, queuing of videos to upload, and uploading to multiple sites? Any software with similar capabilities would be a help. Thanks!

    Read the article

  • Oracle HRMS API – Delete Employee Element Entry

    - by PRajkumar
    API -- pay_element_entry_api.delete_element_entry

    Example -- Consider an employee who has the element entry "Bonus". Let's try to delete the "Bonus" element entry using the delete API.

        DECLARE
            ld_effective_start_date     DATE;
            ld_effective_end_date       DATE;
            lb_delete_warning           BOOLEAN;
            ln_object_version_number    PAY_ELEMENT_ENTRIES_F.OBJECT_VERSION_NUMBER%TYPE := 1;
        BEGIN
            -- Delete Element Entry
            -- --------------------
            pay_element_entry_api.delete_element_entry
            (   -- Input data elements
                -- -------------------
                p_datetrack_delete_mode    => 'DELETE',
                p_effective_date           => TO_DATE('23-JUNE-2011'),
                p_element_entry_id         => 118557,
                -- Output data elements
                -- --------------------
                p_object_version_number    => ln_object_version_number,
                p_effective_start_date     => ld_effective_start_date,
                p_effective_end_date       => ld_effective_end_date,
                p_delete_warning           => lb_delete_warning
            );
            COMMIT;
        EXCEPTION
            WHEN OTHERS THEN
                ROLLBACK;
                dbms_output.put_line(SQLERRM);
        END;
        /
        SHOW ERR;

    Read the article

  • Entity Framework and distributed Systems

    - by Dirk Beckmann
    I need some help, or maybe only a hint in the right direction. I've got a system that is separated into two applications: an existing VB.NET desktop client using Entity Framework 5 with the code-first approach, and an ASP.NET Web API client in C# that is being refactored right now. It should be possible to deliver OData. The system and the data model are still evolving, so migrations will happen at undefined intervals. I'm now struggling with how to manage my database access on the Web API system. My favoured approach would be to use Entity Framework on both systems, but I'm running into trouble when creating new migrations. Two solutions I've thought about:

    Shared data access DLL: The first idea was to separate the data access layer into its own project and reference it from each of the systems. The context would be the same, as long as the DLL is up to date in each system. This way both solutions would be able to make a migration. The main problem is that it is much more complicated to update the Web API system than it is to update the client with its ClickOnce update solution, and not every migration is important for the Web API. This would cause more update trouble and out-of-sync libraries.

    Database first on the Web API: The second idea was just to use the database-first approach on the Web API side. But it seems that all annotations will be lost with each model update.

    Other solutions with stored procedures have been discarded because of missing OData support and maintainability. Has anyone run into the same conflicts, or does anyone have advice on how such a problem can be solved?

    Read the article

  • JEP 124: Enhance the Certificate Revocation-Checking API

    - by smullan
    Revocation checking is the mechanism to determine the revocation status of a certificate. If it is revoked, it is considered invalid and should not be used. Currently, as of JDK 7, the PKIX implementation of java.security.cert.CertPathValidator includes a revocation checking implementation that supports both OCSP and CRLs, the two main methods of checking revocation. However, there are very few options that allow you to configure the behavior. You can always implement your own revocation checker, but that's a lot of work.

    JEP 124 (Enhance the Certificate Revocation-Checking API) is one of the 11 new security features in JDK 8. This feature enhances the java.security.cert API to support various revocation settings such as best-effort checking, end-entity certificate checking, and mechanism-specific options and parameters. Let's describe each of these in more detail and show some examples.

    The features are provided through a new class named PKIXRevocationChecker. A PKIXRevocationChecker instance is returned by a PKIX CertPathValidator as follows:

        CertPathValidator cpv = CertPathValidator.getInstance("PKIX");
        PKIXRevocationChecker prc = (PKIXRevocationChecker) cpv.getRevocationChecker();

    You can now set various revocation options by calling different methods of the returned PKIXRevocationChecker object. For example, the best-effort option (called soft-fail) allows the revocation check to succeed if the status cannot be obtained due to a network connection failure or an overloaded server. It is enabled as follows:

        prc.setOptions(EnumSet.of(Option.SOFT_FAIL));

    When the SOFT_FAIL option is specified, you can still obtain any exceptions that may have been thrown due to network issues. This can be useful if you want to log this information or treat it as a warning. You can obtain these exceptions by calling the getSoftFailExceptions method:

        List<CertPathValidatorException> exceptions = prc.getSoftFailExceptions();

    Another new option called ONLY_END_ENTITY allows you to only check the revocation status of the end-entity certificate. This can improve performance, but you should be careful using this option, as the revocation status of CA certificates will not be checked. To set more than one option, simply specify them together, for example:

        prc.setOptions(EnumSet.of(Option.SOFT_FAIL, Option.ONLY_END_ENTITY));

    By default, PKIXRevocationChecker will try to check the revocation status of a certificate using OCSP first, and then CRLs as a fallback. However, you can switch the order using the PREFER_CRLS option, or disable the fallback altogether using the NO_FALLBACK option. For example, here is how you would only use CRLs to check the revocation status:

        prc.setOptions(EnumSet.of(Option.PREFER_CRLS, Option.NO_FALLBACK));

    There are also a number of other useful methods which allow you to specify various options such as the OCSP responder URI, the trusted OCSP responder certificate, and OCSP request extensions. However, one of the most useful features is the ability to specify a cached OCSP response with the setOCSPResponse method. This can be quite useful if the OCSP response has already been obtained, for example in a protocol that uses OCSP stapling.

    After you have set all of your preferred options, you must add the PKIXRevocationChecker to your PKIXParameters object as one of your custom CertPathCheckers before you validate the certificate chain, as follows:

        PKIXParameters params = new PKIXParameters(keystore);
        params.addCertPathChecker(prc);
        CertPathValidatorResult result = cpv.validate(path, params);

    Early access binaries of JDK 8 can be downloaded from http://jdk8.java.net/download.html

    Read the article

  • .NET "must-have" development tools

    - by nzpcmad
    James Avery wrote a classic article a while back entitled "Ten Must-Have Tools Every Developer Should Download Now", which is a companion to "Visual Studio Add-Ins Every Developer Should Download Now", and Scott Hanselman has an excellent list on his blog. But if you were on a desert island and were only allowed three .NET development tools, which ones would you pick?

    Update: Assuming you already have an IDE like Visual Studio ...

    Update (5): Up to 08/01, the current state of play:

    - Reflector 13
    - Resharper 9
    - NUnit + TestDriven.Net 7
    - Refactor Pro 4
    - Process Explorer (other Sysinternals) 3
    - SnippetCompiler 3
    - CodeRush 3
    - MSDN Library 2
    - LinqPad 2
    - Cruisecontrol.net 2
    - VMWare 2
    - RhinoMocks 2
    - Fiddler 2
    - PowerShell 2
    - PowerCommands for VS 2008 1
    - Sandcastle 1
    - SQL Profiler 1
    - Redgate ANTS profiler 11
    - NCover 1
    - VisualSVN 1
    - Rubber Ducky 1
    - WinMerge 1
    - NAnt 1
    - ViEmu 1
    - AnkhSVN 1
    - dotTrace Profiler 1
    - BeyondCompare 1
    - DPack VS Plugin 1
    - WCF Trace Viewer (SDK) 1
    - xUnit.net 1
    - SourceGear DiffMerge 1
    - Ghostdoc 1
    - Expression Studio 1
    - XAML Pad 1
    - KaXaml 1
    - Blender for 3D modeling 1
    - Snoop a WPF tool 1
    - DiffMerge 1
    - DPack 1
    - NDepend 1
    - Kodos 1
    - WatiN 1
    - HTTPWatch Basic Edition 1
    - Paint.Net 1
    - Mole For VS 1

    What I find particularly interesting about this is that "NUnit + TestDriven.Net" is right up there in third place, which shows the growing emphasis on testing as an integral part of the development process rather than as an adjunct which is simply bolted on. And I'm somewhat perplexed that Codesmith didn't receive a single vote?

    Read the article

  • How can I return json from my WCF rest service (.NET 4), using Json.Net, without it being a string,

    - by Samuel Meacham
    The DataContractJsonSerializer is unable to handle many scenarios that Json.Net handles just fine when properly configured (specifically, cycles). A service method can either return a specific object type (in this case a DTO), in which case the DataContractJsonSerializer will be used, or I can have the method return a string and do the serialization myself with Json.Net. The problem is that when I return a json string, as opposed to an object, the json that is sent to the client is wrapped in quotes.

    Using DataContractJsonSerializer, returning a specific object type, the response is:

        {"Message":"Hello World"}

    Using Json.Net to return a json string, the response is:

        "{\"Message\":\"Hello World\"}"

    I do not want to have to eval() or JSON.parse() the result on the client, which is what I would have to do if the json comes back as a string wrapped in quotes. I realize that the behavior is correct; it's just not what I want/need. I need the raw json - the behavior when the service method's return type is an object, not a string.

    So, how can I have my method return an object type but not use the DataContractJsonSerializer? How can I tell it to use the Json.Net serializer instead? Or, is there some way to write directly to the response stream, so I can just return the raw json myself, without the wrapping quotes?

    Here is my contrived example, for reference:

        [DataContract]
        public class SimpleMessage
        {
            [DataMember]
            public string Message { get; set; }
        }

        [ServiceContract]
        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
        [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
        public class PersonService
        {
            // uses DataContractJsonSerializer
            // returns {"Message":"Hello World"}
            [WebGet(UriTemplate = "helloObject")]
            public SimpleMessage SayHelloObject()
            {
                return new SimpleMessage { Message = "Hello World" };
            }

            // uses Json.Net serialization, to return a json string
            // returns "{\"Message\":\"Hello World\"}"
            [WebGet(UriTemplate = "helloString")]
            public string SayHelloString()
            {
                SimpleMessage message = new SimpleMessage() { Message = "Hello World" };
                string json = JsonConvert.SerializeObject(message);
                return json;
            }

            // I need a mix of the two: return an object type, but use the Json.Net serializer.
        }
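
    One possible direction, sketched here rather than taken from the original post: a WCF WebHttp operation that returns System.IO.Stream has its bytes written to the response as-is, which lets you serialize with Json.Net yourself and set the content type, bypassing the DataContractJsonSerializer. A minimal sketch, reusing the SimpleMessage type above (the service name RawJsonService is hypothetical):

        using System.IO;
        using System.ServiceModel;
        using System.ServiceModel.Activation;
        using System.ServiceModel.Web;
        using System.Text;
        using Newtonsoft.Json;

        [ServiceContract]
        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
        public class RawJsonService
        {
            // Returning a Stream means the bytes go to the client unmodified,
            // so the response body is {"Message":"Hello World"} with no wrapping quotes.
            [WebGet(UriTemplate = "helloRaw")]
            public Stream SayHelloRaw()
            {
                var message = new SimpleMessage { Message = "Hello World" };
                string json = JsonConvert.SerializeObject(message);

                WebOperationContext.Current.OutgoingResponse.ContentType = "application/json";
                return new MemoryStream(Encoding.UTF8.GetBytes(json));
            }
        }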

    Read the article

  • .NET proxy detection

    - by Ziplin
    I am having an issue with .NET detecting the proxy settings configured through Internet Explorer. I'm writing a client application that supports proxies, and to test I set up an array of 9 Squid servers to support various authentication methods for HTTP and HTTPS. I have a script that updates IE to whichever configuration I choose (which proxy, detection via "Auto", PAC, or hardcode). I have tried the 3 methods below to detect the IE configuration through .NET. On occasion I notice that .NET picks up the wrong set of proxy servers. IE has the correct settings, and if I browse the web with IE, I can see I am hitting the correct servers via Wireshark.

        WebRequest.GetSystemWebProxy().GetProxy(destination);
        GlobalProxySelection.Select.GetProxy(destination);
        WebRequest.DefaultWebProxy

    Here are the tips I have:

    - My script sets a PAC file on a webserver and updates the configuration in IE, then clears IE's cache.
    - .NET seems to get "stuck" on a certain proxy configuration, and I have to set another configuration for .NET to realize there was a change.
    - Occasionally it seems to pick some random set of servers (I'm sure they're not random, just a set of servers I used once that are in some cached PAC file or something). As in, I will check the proxy for the destination "https://www.secure.com" and I may have IE configured for, and thus expect to get, "http://squidserver:18", but instead it will return "http://squidserver:28" (port 18 runs NTLM, 28 runs without authentication). All the squid servers work.
    - This does not appear to be an issue on XP, only Vista, 2003, and Windows 7.
    - Hardcoding the proxy servers in IE ALWAYS works.
    - Time always solves the issue - if I leave the computer for about 20 or 30 minutes and come back, .NET picks up the correct proxy settings, as if a cached PAC script expired.

    Read the article

  • Decoupling the view, presentation and ASP.NET Web Forms

    - by John Leidegren
    I have an ASP.NET Web Forms page which the presenter needs to populate with controls. This interaction is somewhat sensitive to the page life cycle, and I was wondering if there's a trick to it that I don't know about. I want to be practical about the whole thing but not compromise testability. Currently I have this:

        public interface ISomeContract
        {
            void InstantiateIn(System.Web.UI.Control container);
        }

    This contract has a dependency on System.Web.UI.Control, and I need that to be able to do things with the ASP.NET Web Forms programming model. But neither the view nor the presenter may have knowledge about ASP.NET server controls. How do I get around this? How can I work with the ASP.NET Web Forms programming model in my concrete views without taking a System.Web.UI.Control dependency in my contract assemblies?

    To clarify things a bit, this type of interface is all about UI composition (using MEF). It's known throughout the framework, but it's really only called from within the concrete view. The concrete view is still the only thing that knows about ASP.NET Web Forms. However, those public methods that say InstantiateIn(System.Web.UI.Control) exist in my contract assemblies, and that implies a dependency on ASP.NET Web Forms. I've been thinking about some double dispatch mechanism or even the visitor pattern to try and work around this.
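
    Not from the original question, but as a sketch of one shape the workaround could take: keep an abstract container in the contract assembly and let the concrete Web Forms view supply the only implementation that knows about System.Web.UI.Control. The names IControlContainer and WebFormsControlContainer are hypothetical.

        // Contract assembly: no reference to System.Web anywhere.
        public interface IControlContainer
        {
            // The view-specific implementation decides what "adding a child" means.
            void AddChild(object child);
        }

        public interface ISomeContract
        {
            void InstantiateIn(IControlContainer container);
        }

        // Web Forms assembly (the concrete view): the only place that knows about Control.
        public class WebFormsControlContainer : IControlContainer
        {
            private readonly System.Web.UI.Control _control;

            public WebFormsControlContainer(System.Web.UI.Control control)
            {
                _control = control;
            }

            public void AddChild(object child)
            {
                // The cast happens here, at the edge, instead of leaking Control into the contracts.
                _control.Controls.Add((System.Web.UI.Control)child);
            }
        }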

    Read the article

  • Visual Studio and .NET programming

    - by Vit
    Hi, I just want to ask whether I am right or not about .NET. So, .NET is a new framework that enables you to easily use new and old Windows functions. It is similar to Java in the way that it's also compiled into "bytecode", but its name is Common Language Infrastructure, or CLI. This language is interpreted by the .NET Framework, so code generated using .NET cannot be executed directly by the CPU. Now, a few languages can be compiled to CLI. First it was the Microsoft-developed C#, then J#, C++ and others. I suspect that this is in general right; at least I hope I understand it correctly. But what I am still missing is: can you write code in C# that is compiled to machine code? And, if using Visual Studio 2005, when I select a Win32 project, it is compiled into machine code, so the only things you need to run these apps are Windows dynamic-link libraries, since static library code is linked into the app during the linking phase. And those dynamic-link libraries are included in every Windows installation, or provided by DirectX installations. But when I select CLR in Visual Studio 2005, the app is compiled into CLI code; it first starts the .NET Framework, and then the .NET Framework executes that program, since it's not in machine code. So, am I right? I ask because you can read this information on the internet, but I have no one to tell me whether I understand it right or not. Thanks.

    Read the article

  • Setting CPU target to x86 on .NET 2.0 project adds .NET 3.5 dependencies.

    - by AngryHacker
    I have a project in VS2008 that targets the .NET 2.0 framework. It was originally set to build for AnyCPU. I changed it to x86, and for whatever reason VS adds the following lines to the .csproj:

        <ItemGroup>
          <BootstrapperPackage Include="Microsoft.Net.Client.3.5">
            <Visible>False</Visible>
            <ProductName>.NET Framework Client Profile</ProductName>
            <Install>false</Install>
          </BootstrapperPackage>
          ...
          <BootstrapperPackage Include="Microsoft.Net.Framework.3.5.SP1">
            <Visible>False</Visible>
            <ProductName>.NET Framework 3.5 SP1</ProductName>
            <Install>false</Install>
          </BootstrapperPackage>
        </ItemGroup>

    Can someone explain why this is being added and whether I can safely remove it, as I still have to target the .NET 2.0 framework? Thanks.

    Read the article
