Search Results

Search found 25180 results on 1008 pages for 'post processing'.


  • Why is .NET POST different from cURL? Is it broken?

    - by ironnailpiercethesky
    I don't understand this. When I run the code below, the resulting JSON string says the link is expired (meaning invalid). The curl commands, however, do the exact same thing and work: I either get the expected string with the URL, or it says I need to wait (for a few seconds to a minute). Why? What's the difference between the two? It seems very messed up that they behave differently (it's been causing me HOURS of problems).

    NOTE: the only cookie the site requires is SID (tested); it holds your session ID. The first POST activates it and the second command checks the status via the returned JSON string. Feel free to set the CookieContainer to only use SID if you like.

    WARNING: you may want to change SID to a different value so other people aren't activating it. You may also want to run the second URL first, to ensure the session ID is unused and says expired/invalid before you start.

    Additional note: with curl, or in your browser, if you do the POST command you can stick the SID into the .NET cookie container and the second command will work. But doing the first command (the POST data) from .NET does not work. I have used this PostData function for many other sites that require POST, and so far it has worked. Obviously checking the Method is a big deal, and I see it is indeed POST when doing the first command.

        static void Main(string[] args)
        {
            var cookie = new CookieContainer();
            PostData("http://uploading.com/files/get/37e36ed8/",
                     "action=second_page&file_id=9134949&code=37e36ed8", cookie);
            Thread.Sleep(4000);
            var res = PostData("http://uploading.com/files/get/?JsHttpRequest=12719362769080-xml&action=get_link&file_id=9134949&code=37e36ed8&pass=undefined",
                               null /* this makes it GET */, cookie);
            Console.WriteLine(res);

            /*
            curl -b "SID=37468830" -A "DUMMY_User_Aggent" -d "action=second_page&file_id=9134949&code=37e36ed8" "http://uploading.com/files/get/37e36ed8/"
            curl -b "SID=37468830" -A "DUMMY_User_Aggent" "http://uploading.com/files/get/?JsHttpRequest=12719362769080-xml&action=get_link&file_id=9134949&code=37e36ed8&pass=undefined"
            */
        }
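    The PostData helper isn't shown in the question, so as a point of comparison here is a minimal sketch of what such a helper typically looks like. The two knobs that most often explain a .NET-versus-curl gap are in the comments: HttpWebRequest sends no User-Agent header unless you set one (curl's -A flag), and the SID cookie must already be in the CookieContainer before the first POST (curl's -b flag). Whether the site keys on both is an assumption.

        // Hedged sketch of a PostData(url, postData, cookies) helper.
        // postData == null means a plain GET, matching the question's usage.
        using System;
        using System.IO;
        using System.Net;
        using System.Text;

        static class Http
        {
            public static string PostData(string url, string postData, CookieContainer cookies)
            {
                var request = (HttpWebRequest)WebRequest.Create(url);
                request.CookieContainer = cookies;       // carries SID between the two calls
                request.UserAgent = "DUMMY_User_Aggent"; // curl sets this via -A; .NET sends none by default

                if (postData != null)
                {
                    request.Method = "POST";
                    request.ContentType = "application/x-www-form-urlencoded";
                    byte[] body = Encoding.UTF8.GetBytes(postData);
                    request.ContentLength = body.Length;
                    using (Stream s = request.GetRequestStream())
                        s.Write(body, 0, body.Length);
                }

                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                    return reader.ReadToEnd();
            }
        }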

    Read the article

  • SQL SERVER – Guest Post – Jonathan Kehayias – Wait Type – Day 16 of 28

    - by pinaldave
    Jonathan Kehayias (Blog | Twitter) is an MCITP Database Administrator and Developer who got started in SQL Server in 2004 as a database developer and report writer in the natural gas industry. After spending two and a half years working in T-SQL, in late 2006 he transitioned to the role of SQL database administrator. His primary passion is performance tuning, where he frequently rewrites queries for better performance and performs in-depth analysis of index implementation and usage. Jonathan blogs regularly on SQLBlog and was a coauthor of Professional SQL Server 2008 Internals and Troubleshooting.

    On a personal note, I think Jonathan is an extremely positive person. In every conversation with him I have found that he is always eager to help and encourage. Every time he finds something that needs to be improved, he has contacted me without hesitation and guided me to improve, change and learn. Through all of this, he has never lost his focus on helping the larger community. I am honored that he has agreed to provide his views on the complex subject of Wait Types and Queues. Currently I am reading his series on Extended Events.

    Here is the guest blog post by Jonathan:

    SQL Server troubleshooting is all about correlating related pieces of information to identify where exactly the root cause of a problem lies. In my daily work as a DBA, I generally get phone calls like, "So-and-so application is slow, what's wrong with the SQL Server?" One of the funny things about the letters DBA is that they go so well with Default Blame Acceptor, and I really wish I knew who the first person was that pointed that out to me, because it really fits at times. A lot of times when I get this call the problem isn't related to SQL Server at all, but every now and then my initial quick checks turn up something that makes me start looking at things further.

    The SQL Server is slow, and we see a number of tasks waiting on ASYNC_IO_COMPLETION, IO_COMPLETION, or PAGEIOLATCH_* waits in sys.dm_exec_requests and sys.dm_os_waiting_tasks. These are also some of the highest wait types in sys.dm_os_wait_stats for the server, so it would appear that we have a disk I/O bottleneck on the machine. A quick check of sys.dm_io_virtual_file_stats() shows a high write stall rate for tempdb, while our user databases show high read stall rates on the data files. A quick check of some performance counters shows Page Life Expectancy on the server bouncing up and down in the 50-150 range, the Free Pages counter consistently hitting zero, and the Free List Stalls/sec counter repeatedly jumping over 10, yet Buffer Cache Hit Ratio is 98-99%. Where exactly is the problem?

    In this case, which happens to be based on a real scenario I faced a few years back, the problem may not be a disk bottleneck at all; it may very well be a memory pressure issue on the server. A quick check of the system specs: it is a dual dual-core server with 8GB RAM running SQL Server 2005 SP1 x64 on Windows Server 2003 R2 x64. Max server memory is configured at 6GB, and we think that this should be enough to handle the workload; or is it? This is a unique scenario, because there are a couple of things happening inside this system, and they all relate to the root cause of the performance problem.
    If we query sys.dm_exec_query_stats for the TOP 10 queries by max_physical_reads, max_logical_reads, and max_worker_time, we may be able to find some queries that were using excessive I/O and possibly CPU against the system in their worst single execution. We can also CROSS APPLY to sys.dm_exec_sql_text() to see the statement text, and CROSS APPLY sys.dm_exec_query_plan() to get the execution plan stored in cache. OK, quick check: the plans are pretty big, and I see some large index seeks that estimate 2.8GB of data movement between operators, but everything looks as optimized as it can be. Nothing really stands out in the code, the indexing looks correct, and I should have enough memory to handle this in cache, so it must be a disk I/O problem, right? Not exactly!

    If we look at how much memory the plan cache is taking, by querying sys.dm_os_memory_clerks for the CACHESTORE_SQLCP and CACHESTORE_OBJCP clerks, we might be surprised at what we find. In SQL Server 2005 RTM and SP1, the plan cache was allowed to take up to 75% of the memory under 8GB. I'll give you a second to go back and read that again. Yes, you read it correctly: it says 75% of the memory under 8GB. You don't have to take my word for it; you can validate this by reading Changes in Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2.

    In this scenario the application uses an entirely ad hoc workload against SQL Server, which leads to plan cache bloat: up to 4.5GB of our 6GB of memory for SQL can be consumed by the plan cache in SQL Server 2005 SP1. This in turn reduces the size of the buffer cache to just 1.5GB, causing our 2.8GB of data movement in this expensive plan to completely flush the buffer cache, not just once initially but again during the query's execution, resulting in excessive physical I/O from disk. Keep in mind that this is not the only query executing when this occurs. Remember that the output of sys.dm_io_virtual_file_stats() showed high read stalls on the data files for our user databases versus higher write stalls for tempdb? The memory pressure is also forcing heavier use of tempdb to handle sorting and hashing in the environment as well.

    The real clue here is the memory counters for the instance: Page Life Expectancy, Free Pages, and Free List Stalls/sec. The fact that Page Life Expectancy is constantly fluctuating between 50 and 150 is a sign that the buffer cache is experiencing constant churn of data, once every minute to two and a half minutes. Add to that the consistent bottoming out of Free Pages, along with Free List Stalls/sec consistently spiking over 10, and you have the perfect memory pressure scenario. All of a sudden it may not be that our disk subsystem is the problem; it may instead be an innocent bystander and victim.

    Side note: the Page Life Expectancy counter dropping briefly and then returning to normal operating values intermittently is not necessarily a sign that the server is under memory pressure. Books Online and a number of other references will tell you that this counter should remain on average above 300, which is the time in seconds a page will remain in cache before being flushed or aged out. This number, which equates to just five minutes, is incredibly low for modern systems, and most published documents pre-date the predominance of 64-bit computing and the easy availability of larger amounts of memory in SQL Servers.
    As food for thought, consider that my personal laptop has more memory in it than most SQL Servers did at the time those numbers were published. I would argue that today, a system churning the buffer cache every five minutes is in need of some serious tuning or a hardware upgrade.

    Back to our problem and its investigation: there are two things really wrong with this server. First, the plan cache is excessively consuming memory and bloated in size, and we need to look at that; second, we need to evaluate upgrading the memory to accommodate the workload being performed. In the case of the server I was working on, there were a lot of single-use plans found in sys.dm_exec_cached_plans (where usecounts = 1). Single-use plans waste space in the plan cache, especially when they are ad hoc plans for statements with concatenated filter criteria that are not likely to recur with any frequency. SQL Server 2005 doesn't natively have a way to evict a single plan from cache like SQL Server 2008 does, but MVP Kalen Delaney showed a hack to evict a single plan by creating a plan guide for the statement and then dropping that plan guide, in her blog post Geek City: Clearing a Single Plan from Cache. We could put that hack in place in a job to automate cleaning out all the single-use plans periodically, minimizing the size of the plan cache, but a better solution would be to fix the application so that it uses proper parameterized calls to the database. You didn't write the app, and you can't change its design? OK, well, you could try to force parameterization to occur by creating and keeping plan guides in place, or you could try forcing parameterization at the database level using ALTER DATABASE <dbname> SET PARAMETERIZATION FORCED, and that might help. If neither of these helps, we could periodically dump the plan cache for that database, though as Kalen's blog post referenced above discusses, that is not an ideal scenario.

    The other option is to increase the memory on the server to 16GB or 32GB, if the hardware allows it, which will increase the size of the plan cache as well as the buffer cache. In SQL Server 2005 SP1, on a system with 16GB of memory, if we set max server memory to 14GB the plan cache could use at most 9GB [(8GB*.75)+(6GB*.5)=(6+3)=9GB], leaving 5GB for the buffer cache. If we went to 32GB of memory and set max server memory to 28GB, the plan cache could use at most 16GB [(8*.75)+(20*.5)=(6+10)=16GB], leaving 12GB for the buffer cache. Thankfully we have SQL Server 2005 Service Packs 2, 3, and 4 these days, which include the changes in plan cache sizing discussed in the Changes to Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2 blog post.

    In real life, when I was troubleshooting this problem, I spent a week trying to chase down the cause of the disk I/O bottleneck with our server admin and SAN admin, and there wasn't much that could be done immediately there, so I finally asked if we could increase the memory on the server to 16GB, which did fix the problem. It wasn't until I had this same problem occur on another system that I actually figured out how to troubleshoot this down to the root cause. I couldn't believe the size of the plan cache on the server with 16GB of memory when I actually learned about this and went back to look at it.
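    To generalize the two calculations above (a sketch of the SP1-era cap described in the caching-behavior post linked earlier, with M standing for max server memory in GB):

        PlanCacheCap(M) = 0.75 * min(M, 8) + 0.5 * max(M - 8, 0)    [GB]

    which reproduces the 9GB (M = 14) and 16GB (M = 28) ceilings above.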
    SQL Server is constantly telling a story to anyone who will listen. As the DBA, you have to sit back and listen to all that it's telling you, and then evaluate the big picture: how all the data you can gather from SQL Server about performance relates together. One of the greatest tools out there is actually free, in the form of the Diagnostic Scripts for SQL Server 2005 and 2008 created by MVP Glenn Alan Berry. Glenn's scripts collect the majority of the information that SQL Server has to offer for rapid troubleshooting of problems, and he includes a lot of notes about what the output of each individual query might be telling you.

    When I read Pinal's blog post SQL SERVER – ASYNC_IO_COMPLETION – Wait Type – Day 11 of 28, I noticed that he referenced Checking Memory Related Performance Counters in his post, but there was no real explanation of why checking memory counters is so important when looking at an I/O-related wait type. I thought I'd chat with him briefly on Google Talk/Twitter DM to point this out and offer a couple of other points I noted, so that he could add the information to his blog post if he found it useful. Instead he asked that I write a guest blog post on it. I am honored to be a guest blogger and to be able to share this kind of information with the community. The information contained in this blog post is a glimpse of how I do troubleshooting almost every day of the week in my own environment. SQL Server provides us with a lot of information about how it is running and where it may be having problems; it is up to us to play detective and figure out how all that information comes together to tell us what's really the problem.

    This blog post is written by Jonathan Kehayias (Blog | Twitter).

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: MVP, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • Unable to install WordPress on an Amazon EC2 instance due to missing php-mbstring

    - by alexus
    I've created a new instance on Amazon's EC2 and I'm trying to install WordPress, but it fails because of a missing php-mbstring dependency:

        # yum install wordpress
        Loaded plugins: amazon-id, rhui-lb
        Resolving Dependencies
        --> Running transaction check
        ---> Package wordpress.noarch 0:3.9.1-1.el7 will be installed
        --> Processing Dependency: php-simplepie >= 1.3.1 for package: wordpress-3.9.1-1.el7.noarch
        --> Processing Dependency: php-mbstring for package: wordpress-3.9.1-1.el7.noarch
        --> Processing Dependency: php-gd for package: wordpress-3.9.1-1.el7.noarch
        --> Processing Dependency: php-enchant for package: wordpress-3.9.1-1.el7.noarch
        --> Processing Dependency: php-PHPMailer for package: wordpress-3.9.1-1.el7.noarch
        --> Running transaction check
        ---> Package php-PHPMailer.noarch 0:5.2.6-1.el7 will be installed
        --> Processing Dependency: php-mbstring >= 5.1.0 for package: php-PHPMailer-5.2.6-1.el7.noarch
        ---> Package php-gd.x86_64 0:5.4.16-21.el7 will be installed
        --> Processing Dependency: libpng15.so.15(PNG15_0)(64bit) for package: php-gd-5.4.16-21.el7.x86_64
        --> Processing Dependency: libt1.so.5()(64bit) for package: php-gd-5.4.16-21.el7.x86_64
        --> Processing Dependency: libpng15.so.15()(64bit) for package: php-gd-5.4.16-21.el7.x86_64
        --> Processing Dependency: libXpm.so.4()(64bit) for package: php-gd-5.4.16-21.el7.x86_64
        --> Processing Dependency: libX11.so.6()(64bit) for package: php-gd-5.4.16-21.el7.x86_64
        ---> Package php-simplepie.noarch 0:1.3.1-4.el7 will be installed
        --> Processing Dependency: php-mbstring for package: php-simplepie-1.3.1-4.el7.noarch
        --> Processing Dependency: php-IDNA_Convert for package: php-simplepie-1.3.1-4.el7.noarch
        ---> Package wordpress.noarch 0:3.9.1-1.el7 will be installed
        --> Processing Dependency: php-mbstring for package: wordpress-3.9.1-1.el7.noarch
        --> Processing Dependency: php-enchant for package: wordpress-3.9.1-1.el7.noarch
        --> Running transaction check
        ---> Package libX11.x86_64 0:1.6.0-2.1.el7 will be installed
        --> Processing Dependency: libX11-common = 1.6.0-2.1.el7 for package: libX11-1.6.0-2.1.el7.x86_64
        --> Processing Dependency: libxcb.so.1()(64bit) for package: libX11-1.6.0-2.1.el7.x86_64
        ---> Package libXpm.x86_64 0:3.5.10-5.1.el7 will be installed
        ---> Package libpng.x86_64 2:1.5.13-5.el7 will be installed
        ---> Package php-IDNA_Convert.noarch 0:0.8.0-2.el7 will be installed
        --> Processing Dependency: php-mbstring for package: php-IDNA_Convert-0.8.0-2.el7.noarch
        ---> Package php-PHPMailer.noarch 0:5.2.6-1.el7 will be installed
        --> Processing Dependency: php-mbstring >= 5.1.0 for package: php-PHPMailer-5.2.6-1.el7.noarch
        ---> Package php-simplepie.noarch 0:1.3.1-4.el7 will be installed
        --> Processing Dependency: php-mbstring for package: php-simplepie-1.3.1-4.el7.noarch
        ---> Package t1lib.x86_64 0:5.1.2-14.el7 will be installed
        ---> Package wordpress.noarch 0:3.9.1-1.el7 will be installed
        --> Processing Dependency: php-mbstring for package: wordpress-3.9.1-1.el7.noarch
        --> Processing Dependency: php-enchant for package: wordpress-3.9.1-1.el7.noarch
        --> Running transaction check
        ---> Package libX11-common.noarch 0:1.6.0-2.1.el7 will be installed
        ---> Package libxcb.x86_64 0:1.9-5.el7 will be installed
        --> Processing Dependency: libXau.so.6()(64bit) for package: libxcb-1.9-5.el7.x86_64
        ---> Package php-IDNA_Convert.noarch 0:0.8.0-2.el7 will be installed
        --> Processing Dependency: php-mbstring for package: php-IDNA_Convert-0.8.0-2.el7.noarch
        ---> Package php-PHPMailer.noarch 0:5.2.6-1.el7 will be installed
        --> Processing Dependency: php-mbstring >= 5.1.0 for package: php-PHPMailer-5.2.6-1.el7.noarch
        ---> Package php-simplepie.noarch 0:1.3.1-4.el7 will be installed
        --> Processing Dependency: php-mbstring for package: php-simplepie-1.3.1-4.el7.noarch
        ---> Package wordpress.noarch 0:3.9.1-1.el7 will be installed
        --> Processing Dependency: php-mbstring for package: wordpress-3.9.1-1.el7.noarch
        --> Processing Dependency: php-enchant for package: wordpress-3.9.1-1.el7.noarch
        --> Running transaction check
        ---> Package libXau.x86_64 0:1.0.8-2.1.el7 will be installed
        ---> Package php-IDNA_Convert.noarch 0:0.8.0-2.el7 will be installed
        --> Processing Dependency: php-mbstring for package: php-IDNA_Convert-0.8.0-2.el7.noarch
        ---> Package php-PHPMailer.noarch 0:5.2.6-1.el7 will be installed
        --> Processing Dependency: php-mbstring >= 5.1.0 for package: php-PHPMailer-5.2.6-1.el7.noarch
        ---> Package php-simplepie.noarch 0:1.3.1-4.el7 will be installed
        --> Processing Dependency: php-mbstring for package: php-simplepie-1.3.1-4.el7.noarch
        ---> Package wordpress.noarch 0:3.9.1-1.el7 will be installed
        --> Processing Dependency: php-mbstring for package: wordpress-3.9.1-1.el7.noarch
        --> Processing Dependency: php-enchant for package: wordpress-3.9.1-1.el7.noarch
        --> Finished Dependency Resolution
        Error: Package: php-PHPMailer-5.2.6-1.el7.noarch (epel)
               Requires: php-mbstring >= 5.1.0
        Error: Package: php-IDNA_Convert-0.8.0-2.el7.noarch (epel)
               Requires: php-mbstring
        Error: Package: wordpress-3.9.1-1.el7.noarch (epel)
               Requires: php-mbstring
        Error: Package: php-simplepie-1.3.1-4.el7.noarch (epel)
               Requires: php-mbstring
        Error: Package: wordpress-3.9.1-1.el7.noarch (epel)
               Requires: php-enchant
        You could try using --skip-broken to work around the problem
        You could try running: rpm -Va --nofiles --nodigest

    I'm using RHEL 7:

        # cat /etc/redhat-release
        Red Hat Enterprise Linux Server release 7.0 (Maipo)
        # yum repolist
        Loaded plugins: amazon-id, rhui-lb
        repo id                                           repo name                                                        status
        epel/x86_64                                       Extra Packages for Enterprise Linux 7 - x86_64                    4,325
        rhui-REGION-client-config-server-7/x86_64         Red Hat Update Infrastructure 2.0 Client Configuration Server 7       1
        rhui-REGION-rhel-server-releases/7Server/x86_64   Red Hat Enterprise Linux Server 7 (RPMs)                          4,447
        repolist: 8,773

    A while back, in another environment, I had to run the following command first in order to get access to php-mbstring:

        rhn-channel --add --channel=rhel-x86_64-server-optional-6

    How do you do that on Amazon EC2?

    Read the article

  • How to serialize data from Processing.js to a Rails application?

    - by railscoder
    Hi, I am creating a simple canvas using Processing.js. How do I pass values from a Rails application to Processing.js?

        void drawBox(int bx, int by, int bs, int bs) {
          strokeWeight(3);
          stroke(50, 50, 50);
          // Test if the cursor is over the box
          if (mouseX > bx-bs && mouseX < bx+bs && mouseY > by-bs && mouseY < by+bs) {
            bover = true;
            if (!locked) {
              fill(181, 213, 255);
            }
          } else {
            fill(255);
            bover = false;
          }
          fill(192);
          stroke(64);
          roundRect(bx, by, 80, 30, 10, 10);
          // put in text
          if (!isRight) {
            text("Box Value", x-size+5, y-5); // Here I need to pass the value from my controller
          } else {
            text("Box Value", x+5, y-5); // Here I need to pass the value from my controller
          }
        }

    Instead of the static string "Box Value", I need to pass in a value from the controller (e.g. @post.name) through AJAX.

    Read the article

  • Linux CUPS on a Raspberry Pi: offloading processing to a server

    - by jaredmsaul
    I am interested in setting up a Raspberry Pi as the local end of a printing solution. In my testing, the Pi chokes on acting as a complete CUPS-based print server. It seems a little underpowered for some of the Ghostscript processing and other filtering that occurs; particularly on larger or more complex documents, the processing time can be 5 minutes or more. My question is: can the processing largely be done elsewhere, with the end product of the processing chain fed to the Pi for output on the connected printer? So in this scenario, an arbitrary document (HTML, PDF, text) is initially 'printed' on a relatively powerful machine, but the output is stored in a file. This file is then grabbed by the Pi and, with all the heavy work out of the way, easily printed using CUPS. I know files can be pushed through CUPS in raw mode, but I am fuzzy on the pros and cons and the applicability to what I describe. I have tested this with pdftops creating a PS file and then feeding that raw to CUPS, and I think it works, but it seems like there may be a cleaner solution. This scenario would ideally work for any number of printer types.

    Read the article

  • IIS 6 405 error when POSTing through an iframe

    - by Angelo R.
    Before I begin, I must add that I am more of a programmer, so please be patient :p I have a Windows Server 2003 machine running IIS 6. I am trying to create a Facebook application that accesses a URL on my server through an iframe. However, Facebook sends some data via POST to my page. I assumed it wouldn't be a problem since the page is .html, but I keep receiving 405 errors (invalid verb) when trying to access it. Since these are generated by IIS, I had hoped there would be a way to allow HTML files to accept POST, but after a lot of Googling it seems that isn't possible. So instead, I figured I could convert the page to an .aspx one and that should work... however, I am running into the same issue. I thought that simply adding POST to the .aspx entry in Application Extension Mappings would work, but it still doesn't. Does anyone know what the problem could potentially be?

    Read the article

  • WordPress queue like Tumblr?

    - by Michael Hopkins
    Hi. Is there a way to give WordPress the queue functionality that Tumblr has? Tumblr's queue, for those who don't know, is a way to space posts out without assigning specific post dates. For example, a Tumblr queue might be set to post every four hours between 9am and 5pm. Tumblr would then publish the front post in the queue at 9am, 1pm and 5pm every day. Posts are added to the queue by clicking "add to queue" instead of "publish." It's quite simple. How can this feature be added to WordPress?

    Read the article

  • Best way to do text processing on Linux/Mac?

    - by euphoria83
    I generally need to do a fair amount of text processing for my research, such as removing the last token from all lines, extracting the first two tokens from each line, splitting each line into tokens, etc. What is the best way to perform this? Should I learn Perl for this, or should I learn some kind of shell commands? The main concern is speed. If I need to write long code for such stuff, it defeats the purpose.
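    For concreteness, here is what those three operations look like spelled out (a C# sketch of the semantics; the awk/cut/perl equivalents are one-liners each, which is the usual argument for picking up one of those tools when speed of writing matters more than speed of running):

        using System;
        using System.Linq;

        class TokenOps
        {
            static void Main()
            {
                string line = "alpha beta gamma delta";
                string[] tokens = line.Split(new[] { ' ', '\t' },
                                             StringSplitOptions.RemoveEmptyEntries);

                // remove the last token from the line
                Console.WriteLine(string.Join(" ", tokens.Take(tokens.Length - 1)));

                // extract the first two tokens
                Console.WriteLine(string.Join(" ", tokens.Take(2)));

                // split the line into tokens, one per output line
                foreach (string t in tokens)
                    Console.WriteLine(t);
            }
        }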

    Read the article

  • Using jQuery to POST Form Data to an ASP.NET ASMX AJAX Web Service

    - by Rick Strahl
    The other day I got a question about how to call an ASP.NET ASMX web service or Page Methods with the POST data from a Web Form (or any HTML form, for that matter). The idea is that you should be able to call an endpoint URL, send it regular urlencoded POST data, and then use Request.Form[] to retrieve the posted data as needed. My first reaction was that you can't do it, because ASP.NET ASMX AJAX services (as well as Page Methods and WCF REST AJAX services) require that the content POSTed to the server be JSON, sent with an application/json or application/x-javascript content type. IOW, you can't directly call an ASP.NET AJAX service with regular urlencoded data.

    Note that there are other ways to accomplish this. You can use ASP.NET MVC and a custom route, an HTTP handler or a separate ASPX page, or even a WCF REST service that's configured to use non-JSON inputs. However, if you want to use an ASP.NET AJAX service (or Page Methods), with a little bit of setup work it's actually quite easy to capture all the form variables on the client and ship them up to the server. The basic steps needed to make this happen are:

    - Capture the form variables into an array on the client with jQuery's .serializeArray() function
    - Use $.ajax() or my ServiceProxy class to make an AJAX call to the server to send this array
    - On the server, create a custom type that matches the .serializeArray() name/value structure
    - Create extension methods on NameValue[] to easily extract form variables
    - Create a [WebMethod] that accepts this name/value type as an array (NameValue[])

    This seems like a lot of work, but realize that steps 3 and 4 are a one-time setup that can be reused across your entire site or multiple applications.

    Let's look at a short example, starting with a base form of fields to ship to the server. The HTML for this form looks something like this:

        <div id="divMessage" class="errordisplay" style="display: none">
        </div>
        <div>
            <div class="label">Name:</div>
            <div><asp:TextBox runat="server" ID="txtName" /></div>
        </div>
        <div>
            <div class="label">Company:</div>
            <div><asp:TextBox runat="server" ID="txtCompany"/></div>
        </div>
        <div>
            <div class="label"></div>
            <div>
                <asp:DropDownList runat="server" ID="lstAttending">
                    <asp:ListItem Text="Attending" Value="Attending"/>
                    <asp:ListItem Text="Not Attending" Value="NotAttending" />
                    <asp:ListItem Text="Maybe Attending" Value="MaybeAttending" />
                    <asp:ListItem Text="Not Sure Yet" Value="NotSureYet" />
                </asp:DropDownList>
            </div>
        </div>
        <div>
            <div class="label">Special Needs:<br />
                <small>(check all that apply)</small></div>
            <div>
                <asp:ListBox runat="server" ID="lstSpecialNeeds" SelectionMode="Multiple">
                    <asp:ListItem Text="Vegitarian" Value="Vegitarian" />
                    <asp:ListItem Text="Vegan" Value="Vegan" />
                    <asp:ListItem Text="Kosher" Value="Kosher" />
                    <asp:ListItem Text="Special Access" Value="SpecialAccess" />
                    <asp:ListItem Text="No Binder" Value="NoBinder" />
                </asp:ListBox>
            </div>
        </div>
        <div>
            <div class="label"></div>
            <div>
                <asp:CheckBox ID="chkAdditionalGuests" Text="Additional Guests" runat="server" />
            </div>
        </div>
        <hr />
        <input type="button" id="btnSubmit" value="Send Registration" />

    The form includes a few different kinds of form fields, including a multi-selection listbox to demonstrate retrieving multiple values.

    Setting up the Server Side [WebMethod]

    The [WebMethod] on the server that we're going to call is very simple: it just captures the content of these values and echoes them back as a formatted HTML string.
    Obviously this is overly simplistic, but it serves to demonstrate the simple point of capturing the POST data on the server in an AJAX callback.

        public class PageMethodsService : System.Web.Services.WebService
        {
            [WebMethod]
            public string SendRegistration(NameValue[] formVars)
            {
                StringBuilder sb = new StringBuilder();
                sb.AppendFormat("Thank you {0}, <br/><br/>",
                                HttpUtility.HtmlEncode(formVars.Form("txtName")));
                sb.AppendLine("You've entered the following: <hr/>");
                foreach (NameValue nv in formVars)
                {
                    // strip out ASP.NET form vars like __ViewState/__EventValidation
                    if (!nv.name.StartsWith("__"))
                    {
                        if (nv.name.StartsWith("txt") || nv.name.StartsWith("lst") || nv.name.StartsWith("chk"))
                            sb.Append(nv.name.Substring(3));
                        else
                            sb.Append(nv.name);
                        sb.AppendLine(": " + HttpUtility.HtmlEncode(nv.value) + "<br/>");
                    }
                }
                sb.AppendLine("<hr/>");

                string[] needs = formVars.FormMultiple("lstSpecialNeeds");
                if (needs == null)
                    sb.AppendLine("No Special Needs");
                else
                {
                    sb.AppendLine("Special Needs: <br/>");
                    foreach (string need in needs)
                        sb.AppendLine("&nbsp;&nbsp;" + need + "<br/>");
                }
                return sb.ToString();
            }
        }

    The key feature of this method is that it receives a custom type called NameValue[], an array of NameValue objects that map the structure the jQuery .serializeArray() function generates. There are two custom types involved: the actual NameValue type, and a NameValueExtensionMethods class that defines a couple of extension methods for the NameValue[] array type to allow single (.Form()) and multiple (.FormMultiple()) value retrieval by name. The NameValue class is as simple as this, and simply maps the structure of the .serializeArray() array elements:

        public class NameValue
        {
            public string name { get; set; }
            public string value { get; set; }
        }

    The extension method class defines the .Form() and .FormMultiple() methods to allow easy retrieval of form variables from the returned array:

        /// <summary>
        /// Extension methods for the NameValue class, which maps the name and value
        /// properties that jQuery's $.serializeArray() function generates for
        /// JSON requests
        /// </summary>
        public static class NameValueExtensionMethods
        {
            /// <summary>
            /// Retrieves a single form variable from the list of
            /// form variables stored
            /// </summary>
            /// <param name="formVars"></param>
            /// <param name="name">formvar to retrieve</param>
            /// <returns>value or string.Empty if not found</returns>
            public static string Form(this NameValue[] formVars, string name)
            {
                var matches = formVars.Where(nv => nv.name.ToLower() == name.ToLower()).FirstOrDefault();
                if (matches != null)
                    return matches.value;
                return string.Empty;
            }

            /// <summary>
            /// Retrieves multiple selection form variables from the list of
            /// form variables stored.
            /// </summary>
            /// <param name="formVars"></param>
            /// <param name="name">The name of the form var to retrieve</param>
            /// <returns>values as string[] or null if no match is found</returns>
            public static string[] FormMultiple(this NameValue[] formVars, string name)
            {
                var matches = formVars.Where(nv => nv.name.ToLower() == name.ToLower()).Select(nv => nv.value).ToArray();
                if (matches.Length == 0)
                    return null;
                return matches;
            }
        }

    Using these extension methods it's easy to retrieve individual values from the array:

        string name = formVars.Form("txtName");

    or multiple values:

        string[] needs = formVars.FormMultiple("lstSpecialNeeds");
        if (needs != null)
        {
            // do something with matches
        }

    Using these functions in the SendRegistration method, it's easy to retrieve a few form variables directly (txtName and the multiple selections of lstSpecialNeeds) or to iterate over the whole list of values. Of course this is an overly simple example; in a typical app you'd probably want to validate the input data, save it to the database, and then return some sort of confirmation or possibly an updated data list back to the client. Since this is a full AJAX service callback, realize that you don't have to return simple string values: you can return any of the supported result types (most serializable types), including complex hierarchical objects and arrays that make sense to your client code.

    POSTing Form Variables from the Client to the AJAX Service

    Calling the AJAX service method from the client is straightforward and requires only a little native jQuery plus JSON serialization functionality. To start, add jQuery and the json2.js library to your page:

        <script src="Scripts/jquery.min.js" type="text/javascript"></script>
        <script src="Scripts/json2.js" type="text/javascript"></script>

    json2.js can be found here (be sure to remove the first line from the file): http://www.json.org/json2.js. It's required to handle JSON serialization for those browsers that don't support it natively. With those script references in the document, let's hook up the button click handler and call the service:

        $(document).ready(function () {
            $("#btnSubmit").click(sendRegistration);
        });

        function sendRegistration() {
            var arForm = $("#form1").serializeArray();
            $.ajax({
                url: "PageMethodsService.asmx/SendRegistration",
                type: "POST",
                contentType: "application/json",
                data: JSON.stringify({ formVars: arForm }),
                dataType: "json",
                success: function (result) {
                    var jEl = $("#divMessage");
                    jEl.html(result.d).fadeIn(1000);
                    setTimeout(function () { jEl.fadeOut(1000) }, 5000);
                },
                error: function (xhr, status) {
                    alert("An error occurred: " + status);
                }
            });
        }

    The key feature in this code is the $("#form1").serializeArray() call, which serializes all the form fields of form1 into an array. Each form var is represented as an object with a name/value property. This array is then serialized into JSON with:

        JSON.stringify({ formVars: arForm })

    The format for the parameter list in AJAX service calls is an object with one property for each parameter of the method. In this case it's a single parameter called formVars, and we're assigning the array of form variables to it. The URL to call on the server is the name of the service (or ASPX page for Page Methods) plus the name of the method to call. On return, the success callback receives the result from the AJAX callback, which in this case is the formatted string that is simply assigned to an element in the form and displayed.
    Remember the result type is whatever the method returns – it doesn't have to be a string. Note that ASP.NET AJAX and WCF REST return JSON data as a wrapped object, so the result has a 'd' property that holds the actual response:

        jEl.html(result.d).fadeIn(1000);

    Slightly Simpler: Using ServiceProxy.js

    If you want things slightly cleaner, you can use the ServiceProxy.js class I've mentioned here before. The ServiceProxy class handles a few things for calling ASP.NET and WCF services more cleanly:

    - Automatic JSON encoding
    - Automatic fix-up of the 'd' wrapper property
    - Automatic date conversion on the client
    - Simplified error handling
    - Reusable and abstracted

    To add the service proxy, add:

        <script src="Scripts/ServiceProxy.js" type="text/javascript"></script>

    and then change the code to this slightly simpler version:

        <script type="text/javascript">
            proxy = new ServiceProxy("PageMethodsService.asmx/");

            $(document).ready(function () {
                $("#btnSubmit").click(sendRegistration);
            });

            function sendRegistration() {
                var arForm = $("#form1").serializeArray();
                proxy.invoke("SendRegistration", { formVars: arForm },
                    function (result) {
                        var jEl = $("#divMessage");
                        jEl.html(result).fadeIn(1000);
                        setTimeout(function () { jEl.fadeOut(1000) }, 5000);
                    },
                    function (error) {
                        alert(error.message);
                    });
            }
        </script>

    The code is not very different, but it makes the call as simple as specifying the method to call, the parameters to pass, and the actions to take on success and error. No more remembering which content type and data type to use and manually serializing to JSON. This code also removes the "d" property processing in the response and provides more consistent error handling, in that the call always returns an error object whether there is a server error or a communication error, unlike the native $.ajax() call. Either approach works, and both are pretty easy. The ServiceProxy really pays off if you make lots of service calls, and especially if you need to deal with date values returned from the server on the client.

    Summary

    Making web service calls and getting POST data to the server is not always the best option – ASP.NET and WCF AJAX services are meant to work with data in objects. However, in some situations it's simply easier to POST all the captured form data to the server instead of first mapping all the input fields to some sort of message object. For this approach, the POST mechanism above is useful, as it puts the parsing of the data on the server and leaves the client code lean and mean. It's even easy to build a custom model binder on the server that can map the array values to properties on an object generically, with some relatively simple Reflection code, and without having to manually map form vars to properties and do string conversions.

    Keep in mind, though, that other approaches also abound. ASP.NET MVC makes it pretty easy to create custom routes to data, and the built-in model binder makes it very easy to deal with inbound form POST data in its original urlencoded format. The West Wind Web Toolkit also includes functionality for AJAX callbacks using plain POST values; all that's needed is a Method parameter in the query string or form variables to specify the method to be called on the server. After that, the content type is completely optional and up to the consumer. It'd be nice if the ASP.NET AJAX and WCF AJAX services weren't so tightly bound to the content type, so that you could more easily create open-access service endpoints that can take advantage of the urlencoded data that is everywhere in existing pages.
    It would make it much easier to create basic REST endpoints without complicated service configuration. Ah, one can dream! In the meantime I hope this article has given you some ideas on how you can transfer POST data from the client to the server using JSON – it might be useful in other scenarios beyond ASP.NET AJAX services as well.

    Additional Resources

    - ServiceProxy.js: A small JavaScript library that wraps $.ajax() to call ASP.NET AJAX and WCF AJAX services. Includes date-parsing extensions to the JSON object, a global dataFilter for processing dates on all jQuery JSON requests, cleanup for the .NET wrapped message format, and consistent error handling.
    - Making jQuery Calls to WCF/ASMX with a ServiceProxy Client: More information on calling ASMX and WCF AJAX services with jQuery, and some more background on ServiceProxy.js. Note the implementation has changed slightly since that article was written.
    - ww.jquery.js: The West Wind Web Toolkit also includes ServiceProxy.js in the West Wind jQuery extension library. This version is slightly different and includes embedded JSON encoding/decoding based on json2.js.

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in jQuery, ASP.NET, AJAX

    Read the article

  • The Missing Post

    - by Joe Mayo
    It’s somewhat of a mystery how the writing process can conjure up results that weren’t initially intended. Case in point: another post was planned to be in place of this one, but it never saw the light of day. That post started off as an introduction to a technology I had just learned and used, and I wanted to share the experience with others. The beginning was fun and demonstrated how easy it was to get started. One of the things I’ve been pondering is that the Web is filled with introductions to new technologies and quick first looks, so I set out to add more depth, share lessons learned, and generally help you avoid the problems I encountered along the way; problems being a key theme of why you aren’t reading that post at this very minute. Problems that curiously came from nowhere to thwart my good intentions.

    Success was sweet when using the tool for the prototypical demo scenario. The thing is, I intended the tool to accomplish a real task. Having embarked on the path toward getting the job done, glitches began creeping into the process. Realizing that this was all a bit new, I had patience and found a suitable workaround, but this was to be short-lived. Like ants marching to a freshly laid-out picnic, the problems kept coming until I had to get up and walk away. Not to be outdone, sheer will and brute-force manual intervention led to mission accomplishment. Though I kept a positive outlook and was pleased with the final result, the process of using the tool had somewhat soured.

    Regardless of a less-than-stellar experience with the tool, I have a great deal of respect for the company that produced it and the people who built it. Perhaps I empathize with what they might feel after reading a post that details such deficiencies in their product. Sure, if you’re in this business you’ve got to have a thick skin: brush it off, fix the problem, and move on to greatness. But today I feel like they’re people, and they are probably already aware of any issues I would seemingly reveal. Anyone who builds a product or provides a service takes a lot of pride in what they do. Sometimes they screw up, and if they’re worth a dime, they make up for it. I think that will happen in this case, and there’s no reason why I should post information that has the potential to sound more negative than helpful. While no one would ever notice or care either way, I’m posting something that won’t harm.

    Joe

    Read the article

  • Windows Azure: need to know the data processing time

    - by veda
    I have stored some files in the form of blobs on Azure, and I have written an application that accesses these blobs. When I host this application as a web role on Azure, it works perfectly and I am happy with that. But now I want to know: what is the query time taken to access each blob file? I searched through the Microsoft Azure Storage SLA and found that for the GetBlob request type, the maximum processing time should be within the product of 2 seconds multiplied by the number of MBs transferred in processing the request. I am still unclear. What is the actual processing time of my data query? How can I measure it? Will I be able to speed up the processing time? I understand that the processing time depends on internet speed, the location of the data center where my data is stored, and the location of the data center where my application is hosted. But still, will I be able to speed up my query?
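    One way to get an actual number, rather than the SLA ceiling, is to time the request yourself. A minimal sketch (the account, container, and blob names are placeholders, and a public-read blob is assumed so that no request signing is needed):

        using System;
        using System.Diagnostics;
        using System.Net;

        class BlobTiming
        {
            static void Main()
            {
                // hypothetical blob URL
                string url = "http://myaccount.blob.core.windows.net/mycontainer/myblob.dat";

                var sw = Stopwatch.StartNew();
                using (var client = new WebClient())
                {
                    byte[] data = client.DownloadData(url);   // full GetBlob round trip
                    sw.Stop();
                    Console.WriteLine("{0} bytes in {1} ms", data.Length, sw.ElapsedMilliseconds);
                }
            }
        }

    Note this measures end-to-end time (server processing plus network transfer); running it from a web or worker role in the same data center as the storage account removes most of the network component, which is also the single biggest lever for speeding the query up.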

    Read the article

  • SiriProxy Harnesses Siri’s Voice Processing to Control Thermostats and More

    - by Jason Fitzpatrick
    iOS: This clever hack taps into the Siri voice agent in iPhone 4S units and allows a proxy service to execute commands outside the normal range of Siri's behavior, like adjusting the thermostat. It's a highly experimental hack, but it showcases the great potential for Siri-based interaction with a wide range of services and network devices. In the above video, Apple enthusiast Plamoni demonstrates how, using SiriProxy, he can check and control his home thermostat. Watch the video to see it in action and, if you feel like riding the edge of experimental and unapproved iPhone antics, you can hit up the link below for the source code and additional documentation. SiriProxy [via ExtremeTech]

    Read the article

  • Reading and conditionally updating N rows, where N > 100,000 for DNA Sequence processing

    - by makerofthings7
    I have a proof-of-concept application that uses Azure tables to associate DNA sequences to "something". Table 1 is the master table; it uniquely lists every DNA sequence. The PK is a load-balanced hash of the RK, and the RK is the unique encoded value of the DNA sequence. Additional tables are created per subject. Each subject has a list of N DNA sequences, each with one reference in the Master table, where N is 100,000. It is possible for many tables to reference the same DNA sequence, but in that case only one entry will be present in the Master table. My Azure dilemma: I need to lock the reference in the Master table as I work with the data. I need to handle timeouts and prevent other threads from overwriting my data while one C# thread is working with the information. Other threads need to realise that it is locked, move on to other, unlocked records, and do the work. Ideally I'd like to get some progress report of how my computation is going, and have the option to cancel the process (and unwind the locks). Question: what is the best approach for this? I'm looking at these code snippets for inspiration: http://blogs.msdn.com/b/jimoneil/archive/2010/10/05/azure-home-part-7-asynchronous-table-storage-pagination.aspx and http://stackoverflow.com/q/4535740/328397
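    For what it's worth, this usually reduces to a lease-style pattern: put LockedUntil/LockOwner columns on the master row and update them with optimistic concurrency, which Azure table storage exposes through the entity ETag (an If-Match conditional update). The expiry time doubles as the timeout. The IMasterTable interface below is hypothetical, standing in for whichever storage client is used; the sketch shows only the locking logic:

        using System;

        class SequenceRow
        {
            public string ETag;           // returned by table storage on every read
            public DateTime LockedUntil;  // UTC; a past value means the lock has timed out
            public string LockOwner;
        }

        interface IMasterTable
        {
            SequenceRow Read(string pk, string rk);
            // conditional update: succeeds only if the stored ETag still matches
            bool TryConditionalUpdate(string pk, string rk, SequenceRow row, string expectedETag);
        }

        static class Locker
        {
            public static bool TryAcquire(IMasterTable table, string pk, string rk,
                                          string owner, TimeSpan lease)
            {
                SequenceRow row = table.Read(pk, rk);
                if (row.LockedUntil > DateTime.UtcNow)
                    return false;                       // held by someone else; try another record

                row.LockOwner = owner;
                row.LockedUntil = DateTime.UtcNow + lease;

                // If another thread updated the row since our read, the ETag no longer
                // matches, the update fails, and the caller moves on to the next record.
                return table.TryConditionalUpdate(pk, rk, row, row.ETag);
            }
        }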

    Read the article

  • Multi MVC processing vs Single MVC process

    - by lordg
    I've worked fairly extensively with the MVC framework CakePHP, but I'm finding that I would rather have my pages driven by multiple MVC triads than by just one. My reason is primarily to keep things DRY. In CakePHP's MVC, you call a URL, which invokes a single MVC, which then renders the layout. What I want is: you call a URL, it processes a layout, and the layout invokes multiple MVCs, one per component or block of HTML on the page. When you compare JavaScript components, AJAX, and server-side HTML rendering, the most consistent method for building pages seems to be through blocks of components or HTML views. That way, a view block could be situated either on the server or the client. This is technically my ONLY disagreement with the MVC model. Outside of this, IMHO MVC rocks! My question is: what other RAD frameworks follow the same principles as MVC but are driven by the View side of MVC? I've looked at Django and Ruby on Rails, yet they seem to be more controller-driven. Lift/Scala appears to be somewhat of a good fit, but I'm interested to see what else exists.

    Read the article

  • Low coupling processing big quantities of data

    - by vitalik
    Usually I achieve low coupling by creating classes that exchange lists, sets, and maps between them. Now I am developing a batch application, and I can't put all the data inside a data structure because there isn't enough memory. I have to read and process one chunk of data and then move on to the next one. So having low coupling is much more difficult, because I have to check somewhere whether there is still data to read, etc. What I am using now is: Source - Process - Persist. The classes that process have to ask the Source classes if there are more rows to read. What are the best practices and/or useful patterns in such situations? I hope I am explaining myself clearly; if not, tell me.
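    One pattern that removes the "are there more rows?" question entirely is to have the Source expose a lazy stream of chunks: the processor just iterates, and iteration itself answers whether more data exists. A minimal sketch (names illustrative):

        // Source yields one bounded chunk at a time; nothing is materialized up front,
        // so memory use is capped at chunkSize regardless of input size.
        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Pipeline
        {
            static IEnumerable<List<string>> ReadChunks(IEnumerable<string> rows, int chunkSize)
            {
                var chunk = new List<string>(chunkSize);
                foreach (var row in rows)
                {
                    chunk.Add(row);
                    if (chunk.Count == chunkSize)
                    {
                        yield return chunk;                    // Process/Persist consume this
                        chunk = new List<string>(chunkSize);   // start a fresh chunk
                    }
                }
                if (chunk.Count > 0)
                    yield return chunk;                        // final partial chunk
            }

            static void Main()
            {
                var rows = Enumerable.Range(1, 10).Select(i => "row " + i);   // stand-in Source
                foreach (var chunk in ReadChunks(rows, 4))                    // Process
                    Console.WriteLine("persisting {0} rows", chunk.Count);    // Persist
            }
        }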

    Read the article

  • Developing an analytics system processing large amounts of data - where to start

    - by Ryan
    Imagine you're writing some sort of web analytics system. You're recording raw page hits along with some extra things like tagging cookies, etc., and then producing stats such as:

    - Which pages got the most traffic over a time period
    - Which referrers sent the most traffic
    - Goals completed (a goal being a view of a particular page)

    And more advanced things, like which referrers sent the most visitors who later hit a goal.

    The naive way of approaching this would be to throw it all in a relational database and run queries over it, but that won't scale. You could pre-calculate everything (have a queue of incoming 'hits' and use it to update report tables), but what if you later change a goal? How could you efficiently re-calculate just the data that would be affected? Obviously this has been done before ;) so any tips on where to start, methods & examples, architecture, technologies, etc.
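    To make the pre-calculation idea concrete, here is a tiny sketch of the hit-queue-updating-counters shape (types and key format are illustrative). The point of keeping the raw hits immutable is that changing a goal later means replaying only the hits for the affected keys, not recomputing the world:

        using System;
        using System.Collections.Generic;

        class Hit { public string Page; public string Referrer; public DateTime When; }

        class Aggregator
        {
            // report "tables" as counters keyed by (dimension, value, day)
            readonly Dictionary<string, long> counters = new Dictionary<string, long>();

            public void Apply(Hit hit)
            {
                string day = hit.When.ToString("yyyy-MM-dd");
                Bump("page:" + hit.Page + ":" + day);
                Bump("ref:" + hit.Referrer + ":" + day);
            }

            void Bump(string key)
            {
                long n;
                counters.TryGetValue(key, out n);
                counters[key] = n + 1;
            }

            public int Count { get { return counters.Count; } }
        }

        class Program
        {
            static void Main()
            {
                var agg = new Aggregator();
                agg.Apply(new Hit { Page = "/home", Referrer = "google.com", When = DateTime.UtcNow });
                agg.Apply(new Hit { Page = "/home", Referrer = "bing.com", When = DateTime.UtcNow });
                Console.WriteLine("{0} counter keys", agg.Count);   // 3: one page key, two referrer keys
            }
        }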

    Read the article

  • signal processing libraries

    - by khinester
    Are there any open source libraries/projects which work in a similar way to http://www.tagattitude.fr/en/products/technology? I am trying to understand the process. At first I thought this could work like sending a fax to a fax machine: it is basically using the mobile phone's microphone as a sensor and its audio channel as a transport. Are there any libraries for generating the signal and then being able to decode it?
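    For what it's worth, systems like this typically encode data as audio tones (FSK: one frequency per bit value) played over the phone's voice channel, and decode on the far end with a Goertzel filter or FFT. Below is a minimal sketch of the encoding half only; the frequencies, bit rate, framing, and error correction are illustrative choices, not the vendor's:

        using System;

        class Fsk
        {
            // Map each bit to a burst of sine samples at f0 (bit 0) or f1 (bit 1).
            static double[] Encode(byte[] data, int sampleRate = 8000,
                                   double f0 = 1200, double f1 = 2200, double bitSeconds = 0.01)
            {
                int samplesPerBit = (int)(sampleRate * bitSeconds);
                var samples = new double[data.Length * 8 * samplesPerBit];
                int n = 0;
                foreach (byte b in data)
                    for (int bit = 7; bit >= 0; bit--)
                    {
                        double f = ((b >> bit) & 1) == 1 ? f1 : f0;
                        for (int i = 0; i < samplesPerBit; i++, n++)
                            samples[n] = Math.Sin(2 * Math.PI * f * n / sampleRate);
                    }
                return samples;   // feed to an audio sink / WAV writer of your choice
            }

            static void Main()
            {
                double[] tone = Encode(new byte[] { 0x42 });
                Console.WriteLine("{0} samples generated", tone.Length);   // 8 bits * 80 samples each
            }
        }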

    Read the article

  • Processing Binary Data in SOA Suite 11g

    - by Ramkumar Menon
    SOA Suite 11g provides a variety of ways to exchange binary data amongst applications and endpoints. The illustration below is a bird's-eye view of all the features in SOA Suite that facilitate such exchanges (Handling Binary data in SOA Suite 11g Composites).

    Samples and Step-by-Step Tutorials

    A few step-by-step tutorials have been uploaded to java.net that illustrate key concepts related to binary content handling within SOA composites. Each sample consists of a fully built composite project that can be deployed and tested, together with a Readme doc with screenshots to build the project from scratch.

    - Binary Content Handling within File Adapter Samples [Opaque, Streaming, Attachments]
    - SOAP with Attachments [SwA] Sample
    - MTOM Sample
    - Mediator Pass-through for Attachments Sample

    For detailed information on binary content and large document handling within SOA Suite, refer to Chapter 42 of the SOA Suite Developer's Guide.

    Handling Binary data in Oracle B2B

    The following diagram illustrates how Oracle B2B facilitates the exchange of binary documents between SOA Suite and Trading Partners.

    Read the article

  • Complex Event Processing and SQL in London next week

    - by simonsabin
    Don’t forget that we have the Stream Insight team coming to London, presenting at a SQL Social event on the 9th of June. Stream Insight is one of the exciting new features in SQL Server 2008 R2. There are numerous uses of Stream Insight, one being algorithmic trading, an exciting topic in the banking sector. For details of what Stream Insight is, go to the team's blog http://blogs.msdn.com/streaminsight/archive/2010/04/22/rtm.aspx and follow some of the links. For more details of the SQL Social...(read more)

    Read the article

  • Interconnect nodes in a Java distributed infrastructure for tweet processing

    - by David Moreno García
    I'm working on a new version of an old project that I used to download and process user statuses from Twitter. The main problem with that project was its infrastructure. I used multiple instances of a Java application (trackers) to download from Twitter given a specific task (basically, terms to search for), connected to a central node (a web application) that had to process all tweets once per day and generate a new task for each tracker every 15 minutes. The central node also had to monitor all trackers and enable/disable them on user request. This, as I said, was too slow because I had multiple bottlenecks, so in this new version I want to improve the infrastructure and isolate all functionality in specific nodes. I also need a good notification system to receive notifications from any node. So, in the next diagram I show the components that I'll need in this new version. As you can see, there are more nodes. Here are some notes about them:

    - Dashboard: Controls tracker statuses and sends a single task to each of them (on user request). The trackers will use this task until it is replaced with a new one (when that happens, not every 15 minutes like before).
    - Search engine: I need to store all the tweets. They are first stored in a local database for each tracker, but after that I'm thinking of using something like Elasticsearch to be able to do fast searches.
    - Tweet processor: Just an isolated component with its own database (maybe something like the search engine's, to have fast access to the info generated by the module). In the future, more could be added.
    - Application UI: A web application with a database shared with the Dashboard (mainly to store user information and preferences). Indeed, both could be merged into a single web app. The main difference from the previous version of the project is that these will now be isolated and will only show information and send requests. I will not do any heavy tasks in them (like processing tweets, as I did before).

    So, having these components, my main headache is how to structure everything so that I don't have to rewrite a lot of code every time I need to access any new data. Another headache is how to interconnect the nodes. I could use sockets, but that is a pain in the ass. Maybe a REST layer? And finally, if all the nodes are isolated, how could I generate notifications for each user whose info lives only in the database used by the Application UI? I'm programming this using Java and Spring (at least I used them in the last version), but I have no problem with changing the language if I can take advantage of a tool/library/engine to make my life easier and have a better platform. Any comment will be appreciated.
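    On the REST-versus-sockets question, a plain HTTP notification endpoint per node is small enough to sketch in full. This one is in C# for brevity (in the Java/Spring stack the same shape is a controller method receiving the POST body; a message broker such as RabbitMQ is the usual step up once fan-out notifications are needed). Ports and paths here are illustrative.

        // Minimal sketch: each node runs a tiny HTTP listener, and other nodes
        // notify it with plain POSTs instead of raw sockets.
        using System;
        using System.IO;
        using System.Net;

        class NotificationNode
        {
            static void Main()
            {
                var listener = new HttpListener();
                listener.Prefixes.Add("http://localhost:8080/notify/");
                listener.Start();
                Console.WriteLine("node listening for notifications...");

                while (true)
                {
                    HttpListenerContext ctx = listener.GetContext();   // blocks until a request arrives
                    using (var reader = new StreamReader(ctx.Request.InputStream))
                        Console.WriteLine("notification received: " + reader.ReadToEnd());
                    ctx.Response.StatusCode = 204;                     // accepted, no body
                    ctx.Response.Close();
                }
            }
        }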

    Read the article

  • Attribute Key Not Found error while processing a cube in SSAS

    - by sathya
    The Attribute Key Not Found error occurs in SSAS for the following reasons:

    - Dimensions processed after measure groups
    - Null values/references in keys

    Please check for null values in interrelated dimensions. (For example: you might have a dimension table Employees, where employees relate to a SubDepartment table and subdepartments relate to a Department table. If there happens to be a null value in the mapping between SubDepartment and Department, you will get this error.)

    Read the article
