Search Results

Search found 18034 results on 722 pages for 'tutor product features'.

Page 651/722

  • Write file needs to be optimised for heavy traffic, part 2

    - by Clayton Leung
    For anyone interested in where I am coming from, you can refer to part 1 ("write file need to optimised for heavy traffic"), but it is not necessary. Below is a snippet of code I have written to capture some financial tick data from the broker API. The code runs without error. I need to optimize the code, because in peak hours the zf_TickEvent method will be called more than 10000 times a second. I use a MemoryStream to hold the data until it reaches a certain size, then I output it into a text file. The broker API is only single threaded.

        void zf_TickEvent(object sender, ZenFire.TickEventArgs e)
        {
            outputString = string.Format("{0},{1},{2},{3},{4}\r\n",
                e.TimeStamp.ToString(timeFmt),
                e.Product.ToString(),
                Enum.GetName(typeof(ZenFire.TickType), e.Type),
                e.Price,
                e.Volume);
            fillBuffer(outputString);
        }

        public class memoryStreamClass
        {
            public static MemoryStream ms = new MemoryStream();
        }

        void fillBuffer(string outputString)
        {
            byte[] outputByte = Encoding.ASCII.GetBytes(outputString);
            memoryStreamClass.ms.Write(outputByte, 0, outputByte.Length);
            if (memoryStreamClass.ms.Length > 8192)
            {
                emptyBuffer(memoryStreamClass.ms);
                memoryStreamClass.ms.SetLength(0);
                memoryStreamClass.ms.Position = 0;
            }
        }

        void emptyBuffer(MemoryStream ms)
        {
            FileStream outStream = new FileStream("c:\\test.txt", FileMode.Append);
            ms.WriteTo(outStream);
            outStream.Flush();
            outStream.Close();
        }

    Question: Any suggestion to make this even faster? I will try to vary the buffer length, but in terms of code structure, is this (almost) the fastest? When the MemoryStream is filled up and I am emptying it to the file, what happens to the new data coming in? Do I need to implement a second buffer to hold that data while I am emptying my first buffer, or is C# smart enough to figure it out? Thanks for any advice.

  • Team Foundation Server 2010 and Offline development?

    - by Bobby Ortiz
    Did Microsoft add anything to improve offline development? I'm comparing TFS with Mercurial.

    Edit #1: Work environment details:

        - 20 developers, 1 location.
        - TFS 2005 is already installed, but only being used by 4 developers. Those that use TFS are only using it for source control; the others are using VSS. :(
        - Many small projects (over 50 projects active), with project team sizes of 1 to 3.
        - Several employees work from home one day a week, but have VPN access.

    There is a group of our devs that have never used TFS and are still on VSS. They are the ones pushing us to jump ship to Mercurial. Mercurial's offline features are one reason they prefer it. Another reason is they just associate TFS with VSS, regardless of my assertions to the contrary. We do use FogBugz and everyone agrees that it is great! This kind of excited our love for non-Microsoft products that are MUCH lighter. I don't think it is worth it.

  • A question of long-running and disruptive branches

    - by Matt Enright
    We are about to begin prototyping a new application that will share some existing infrastructure assemblies with an existing application, and also involve a significant subset of the existing domain model. Parts of the domain model will likely undergo some serious changes for this new application. The endgame, once the new application has been fully specified and is launch-ready, is that we would like to re-unify the models of the two applications (as well as share a database, link functionality, etc.), but for the duration of development and prototyping we will be using a separate database so that we can change things without worrying about impact to development or use of the existing application. Since it is a prototype, there will be a pretty long window during which serious changes or rearchitecting can occur as product management experiments with different workflows, different customer bases are surveyed, and we try to keep up.

    We have already made a Subversion branch, so as to not impact concurrent development on the mature application, and are toying with 2 potential ways of moving forward with this:

        1. Use the svn branch as the sole mechanism of separation. Make our changes to the existing domain models, and evaluate their impact on the existing application (and make requisite changes to ProjectA) when we have established that our long-running side branch is stable enough for re-entry to trunk.
        2. "Fork" the shared code (temporarily): copy ProjectA.Entities to NewProject.Entities, and treat all of the NewProject code as self-contained. When all of the perturbations around the model have died down and we feel satisfied, manually re-integrate the changes (as granular or sweeping as warranted) back into ProjectA.Entities, updating ProjectA to use the improved models at each step (this can take place either before or after the subversion merge has occurred). The subversion merge will then not handle recombination of any of the heavy changes here.

    Note: the "fork" method only applies to the code we see significant changes in store for, and whose modification will break ProjectA - shared infrastructure stuff, for example, we would just modify in place (on our branch) and let the merge sort out.

    Development is hard, go shopping. Naturally, after not coming to an agreement, we're turning it over to the oracle of power that is SO. Any experience with any of these methods, pain points to watch out for, something new entirely?

  • Framework/service for hosting and managing files

    - by Peteris Caune
    Hi, in a webapp I'm building there is a planned side feature of supporting product illustrations and manuals (so pictures and PDFs), possibly arranged in galleries. As I'd rather not implement from scratch all of the uploading, managing and serving of this content, I'm looking for existing solutions which I could integrate. For example, I'm considering Flickr: users of the webapp would specify their Flickr username and then use some naming convention to link objects in my webapp with pictures uploaded to their Flickr account. There would be very little code to write on my side, maybe just some API calls that proxy the Flickr APIs, since Flickr would handle picture uploading, organizing pictures in sets, storing them in the cloud, serving them in various sizes, etc. One drawback here is that either all of the pictures are public or I have to deal with interactive Flickr authorization. I'm also not sure if Flickr would be happy being used in such a manner. What other online services or libraries/frameworks should I look at? My webapp is written in Python/Pylons, so Python libraries would be preferred. I'm already using some of Amazon's infrastructure, so frontends to Amazon S3 would be cool. For online services, a RESTful API would be nice.

  • License For (Mostly) Open Source Website / Service

    - by Ryan Sullivan
    I have an interesting setup and am not sure how to license a website. I know this is not legal advice, and I am not asking for any. There are so many different open source licenses and I do not have the time to read every last one to see which best fits my situation. Really, I am looking for suggestions and a nudge in the right direction.

    My setup is: I give away a free version of my web service with a clean website interface. The implementation I use on the actual web site is (almost) identical to what I give away. The main service works exactly the same way, but the website interface to manage features in the service is fairly different. Really, the web interfaces have the same exact backend, and the front ends accomplish the same tasks, but the service I offer on my site is very rich and uses a good deal of JavaScript, whereas I kept the interface in the version I give away as simple and JavaScript-less as possible, mostly so it is easy to understand and integrate into other sites.

    I am not entirely sure how I should license this. It is more like I develop an open source service but have a separate site built upon it. I like the GPLv3 but I am not sure if I can use it in this case, especially since I am making some money off of Google ads on the site and plan on using Amazon affiliates as well. Any help would be greatly appreciated. I do want to open it up as much as possible, but I still want to be able to continue with my own implementation. Thanks in advance for any information or help anyone can provide.

  • How to catch this low level MySQL (?) error in PHP/Magento

    - by andnil
    When I'm executing the following statement in Magento with a really large $sku, the execution terminates without any errors thrown whatsoever. There are no errors in either Magento's, Apache's or PHP's error logs.

        Mage::getModel('catalog/product')->loadByAttribute('sku', $sku);

    Question: How do I catch the error? I've tried to set custom error handlers, and for testing purposes I've also managed to trigger error situations where each of the error handler functions is invoked. But when running the previously mentioned Magento code with a large $sku, none of the error handling functions are executed.

        error_reporting( -1 );
        set_error_handler( array( 'Error', 'captureNormal' ) );
        set_exception_handler( array( 'Error', 'captureException' ) );
        register_shutdown_function( array( 'Error', 'captureShutdown' ) );

    For completeness, this is the $sku I'm passing to loadByAttribute(). (The sku is invalid, but that is not the issue.)

        1- 9685 0102046|1- 9685 1212100|1- 9685 1212092|1- 9685 1212096|1- 9685 1102100|1- 9685 1102108|1- 9685 1102112|1- 9685 1102092|1- 9685 0102048|1- 9685 0102054|1- 9685 0102056|1- 9685 0102058|1- 9685 1212104|1- 9685 1212108|1- 9685 0212058|1- 9685 0104050|1- 9685 0212050|1- 9685 0212056|1- 9685 0212044|1- 9685 0212048|1- 9685 0212052|1- 9685 0212054|1- 9685 1102104|1- 9685 1102124

    Any insight into this matter is much appreciated!

    Update: Upon further investigation, this is the exact point in the code where execution terminates. When the foreach is executed, I guess Magento goes into MySQL world and starts loading up data from the database. In \Mage\Catalog\Model\Abstract.php:

        public function loadByAttribute($attribute, $value, $additionalAttributes = '*')
        {
            $collection = $this->getResourceCollection()
                ->addAttributeToSelect($additionalAttributes)
                ->addAttributeToFilter($attribute, $value)
                ->setPage(1, 1);

            foreach ($collection as $object) { // <--------------- HERE
                return $object;
            }

            return false;
        }

    Note, I'm ONLY interested in finding out how to properly CATCH these kinds of errors, not "fix" the logic. This is so that I can present a proper error message to the user. The example above with the malformed sku is contrived and I have no desire to make my Magento app work with those erroneous skus.

  • Using MPI_Type_Vector and MPI_Gather, in C.

    - by Goloneg
    Hi, I am trying to multiply square matrices in parallel with MPI. I use an MPI_Type_vector to send square submatrices (arrays of float) to the processes, so they can calculate subproducts. Then, for the next iterations, these submatrices are sent to neighbouring processes as MPI_Type_contiguous (the whole submatrix is sent). This part is working as expected, and the local results are correct.

    Then, I use MPI_Gather with the contiguous types to send all local results back to the root process. The problem is that the final matrix is built (obviously, by this method) line by line instead of submatrix by submatrix. I wrote an ugly procedure rearranging the final matrix, but I would like to know if there is a direct way of performing the "inverse" operation of sending MPI_Type_vectors (i.e., sending an array of values and directly arranging it in subarray form in the receiving array).

    An example, to try to clarify my long text: A[16] and B[16] are 4x4 matrices to be multiplied; C[16] will contain the result; 4 processes are used (Pi with i from 0 to 3). Pi gets two 2x2 submatrices, subAi[4] and subBi[4]; their product is stored locally in subCi[4]. For instance, P0 gets subA0[4] containing A[0], A[1], A[4] and A[5], and subB0[4] containing B[0], B[1], B[4] and B[5]. After everything is calculated, the root process gathers all subCi[4]. Then C[16] contains:

        [ subC0[0], subC0[1], subC0[2], subC0[3],
          subC1[0], subC1[1], subC1[2], subC1[3],
          subC2[0], subC2[1], subC2[2], subC2[3],
          subC3[0], subC3[1], subC3[2], subC3[3] ]

    and I would like it to be:

        [ subC0[0], subC0[1], subC1[0], subC1[1],
          subC0[2], subC0[3], subC1[2], subC1[3],
          subC2[0], subC2[1], subC3[0], subC3[1],
          subC2[2], subC2[3], subC3[2], subC3[3] ]

    without further operation. Does someone know a way? Thanks for your advice.

  • Where to start with the development of first database driven Web App (long question)?

    - by Ryan
    Hi all, I've decided to develop a database driven web app, but I'm not sure where to start. The end goal of the project is three-fold: 1) to learn new technologies and practices, 2) deliver an unsolicited demo to management that would show how information that the company stores as office documents spread across a cumbersome network folder structure can be consolidated and made easier to access and maintain, and 3) show my co-workers how Test Driven Development and prototyping via class diagrams can be very useful and reduce future maintenance headaches.

    I think this ends up being a basic CMS for which I have generated a set of features, see below.

        1. Create a database to store the site structure (organized as a tree with a 'project group'-project structure).
        2. Pull the site structure from the database and display it as a tree using basic front end technologies.
        3. Add administrator privileges/tools for modifying the site structure.
        4. Auto-create required sub pages when an admin adds a new project.
           4.1. There will be several sub pages under each project and the content for each sub page is different.
        5. Add user privileges for assigning read and write privileges to sub pages.

    What I would like to do is use Test Driven Development and class diagramming as part of my process for developing this project. My problem: I'm not sure where to start. I have read about unit testing and UML, but never used them in practice. Also, having never worked with databases before, how do I incorporate these items into the models and test units? Thank you all in advance for your expertise.

  • How to document an existing small web site (web application), inside and out?

    - by Ricket
    We have a "web application" which has been developed over the past 7 months. The problem is, it was not really documented. The requirements consisted of a small bulleted list from the initial meeting 7 months ago (it's more of a "goals" statement than software requirements). It has collected a number of features which stemmed from small verbal or chat discussions. The developer is leaving very soon. He wrote the entire thing himself and he knows all of the quirks and underlying rules to each page, but nobody else really knows much more than the user interface side of it; which of course is the easy part, as it's made to be intuitive to the user. But if someone needs to repair or add a feature to it, the entire thing is a black box. The code has some minimal comments, and of course the good thing about web applications is that the address bar points you in the right direction towards fixing a problem or upgrading a page. But how should the developer go about documenting this web application? He is a bit lost as far as where to begin. As developers, how do you completely document your web applications for other developers, maintainers, and administrative-level users? What approach do you use, where do you start, do you have a template? An idea of magnitude: it uses PHP, MySQL and jQuery. It has about 20-30 main (frontend) files, along with about 15 included files and a couple folders of some assets. So overall it's a pretty small application. It interfaces with 7 MySQL tables, each one well-named, so I think the database end is pretty self-explanatory. There is a config.inc.php file with definitions of consts like the MySQL user details, some from/to emails, and URLs which PHP uses to insert into emails and pages (relative and absolute paths, basiecally). There is some AJAX via jQuery. Please comment if there is any other information that would help you help me and I will be glad to edit it in.

  • HTML5 Local Storage of audio element source - is it possible?

    - by andrewdotcom
    Hi stackoverflow experts, I've been experimenting with the audio and local storage features of HTML5 of late and have run into something that has me stumped. I'd like to be able to cache or store the source of the audio element locally to enable speedier and offline playback. The problem is I can't see how this is possible with the current implementation. I have tried the following using WebKit:

        - Creating a manifest file to set up local caching, but the audio file appears not to be a cacheable item, maybe due to the way it is streamed or something.
        - Using JavaScript to put an audio object into local storage, but the size of the MP3 makes this impossible due to memory issues (I think).
        - Using a data URI and base64 to use the HTML as an audio transport that can be cached, but again the file size makes this prohibitive. Also, the audio element does not seem to like this in WebKit (it works fine in Mozilla).
        - Several methods of putting the data into the local database store, suffering the same issues as the other cases.

    I'd love to hear any other ideas anyone may have as to how I could achieve my goal of offline playback using caching/local storage in WebKit.
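    For readers hitting the same wall in newer browsers, here is a minimal sketch of one possible approach (not from the original post): download the audio as a Blob, keep it in IndexedDB, and point the audio element at an object URL. It assumes IndexedDB, XHR blob responses and URL.createObjectURL are available, which was not the case in the WebKit builds described above; the database and store names are made up for the example.

        // Hypothetical sketch: cache an audio file in IndexedDB and play it from the cache.
        function cacheAndPlay(url, audioEl) {
            var open = indexedDB.open('audio-cache', 1);
            open.onupgradeneeded = function () {
                open.result.createObjectStore('tracks'); // keyed by URL
            };
            open.onsuccess = function () {
                var db = open.result;
                var get = db.transaction('tracks').objectStore('tracks').get(url);
                get.onsuccess = function () {
                    if (get.result) {
                        // Cached: play the stored Blob, no network needed.
                        audioEl.src = URL.createObjectURL(get.result);
                    } else {
                        // Not cached yet: download as a Blob, store it, then play it.
                        var xhr = new XMLHttpRequest();
                        xhr.open('GET', url);
                        xhr.responseType = 'blob';
                        xhr.onload = function () {
                            db.transaction('tracks', 'readwrite')
                              .objectStore('tracks').put(xhr.response, url);
                            audioEl.src = URL.createObjectURL(xhr.response);
                        };
                        xhr.send();
                    }
                };
            };
        }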

  • Socket Performance C++ Or C#

    - by modernzombie
    I have to write an application that is essentially a proxy server to handle all HTTP and HTTPS requests from our server (web browsing, etc). I know very little C++ and am very comfortable writing the application features in C#. I have experimented with the proxy from Mentalis (socket proxy) which seems to work fine for small webpages but if I go to large sites like tigerdirect.ca and browse through a couple of layers it is very slow and sometimes requests don't complete and I see broken images and javascript errors. This happens with all of our vendor sites and other content heavy sites. Mentalis uses HTTP 1.0 which I know is not as efficient but should a proxy be that slow? What is an acceptable amount of performance loss from using a proxy? Would HTTP 1.1 make a noticeable difference? Would a C++ proxy be much faster than one in C#? Is the Mentalis code just not efficient? Would I be able to use a premade C++ proxy and import the DLL to C# and still get good performance or would this project call for all C++? Sorry if these are obvious questions but I have not done network programming before.

  • What are your suggestions for best practices for regular data updates in a website database?

    - by bboyle1234
    My shared-hosting asp.net website must automatically run data update routines at regular times of day. Once it has finished running certain update routines, it can run update routines that are dependent on the previous updates. I have done this type of work before, using quite complicated setups. Some features of the framework I created are:

        - A cron job from another server makes a request which starts a data update routine on the main server.
        - Each updater is loaded from web.config.
        - Each updater overrides a "canRunUpdate" method that determines whether its dependencies have finished updating.
        - Each updater overrides a "hasFinishedUpdate" method.
        - Each updater overrides a "runUpdate" method.
        - Updaters start and run in parallel threads.

    The initial request from the cron job server started each updater in its own thread and then ended. As a result, the threads containing the updaters would be terminated before the updaters were finished. Therefore I had to give the updaters the ability to save partial results and continue the update job next time they are started up. As a result, the cron server had to call the updater many times to ensure the job is done. Sometimes the cron server would continue making update requests long after all the updates were completed. Sometimes the cron server would finish calling the update requests and leave some updates uncompleted. It's not the best system. I'm looking for inspiration. Any ideas please? Thank you :)

  • Temporarily disable vim plugin without relaunching

    - by simont
    I'm using c-support in Vim. One of its features is the automatic comment expansion. When I'm pasting code into Vim from an external editor, the comments are expanded (which gives me double-comments and messes up the paste - see below for an example). I'd like to be able to disable the plugin, paste, then re-enable it, without relaunching Vim. I'm not sure if this is possible. The SO questions here, here and here all describe methods to disable plugins, but they all require me to close Vim, mess with my .vimrc or similar, and relaunch; if I have to close Vim, I might as well cat file1 >> myfile; vim myfile, then shift the lines internally, which will be just as quick. Is it possible to disable a plugin while running Vim without relaunching, preferably in a way which allows me to map a hot-key toggle-plugin (so re-sourcing ~/.vimrc is alright; that's mappable to a hotkey [I imagine, haven't tried it yet])?

    Messed up comments:

        /*
         * * Authors:
         * *     A Name
         * *
         * * Copyright:
         * *     A Name, 2012
         * */

    EDIT: It turns out you can :set paste, :set nopaste (which, to quote :help paste, will "avoid unexpected effects [while pasting]"). (See the comments.) However, I'm still curious whether you can disable/enable a plugin as per the original question, so I shall leave the question open.

  • Basic formatting. sed, or cut, or what?

    - by dsclough
    Very new to this whole Unix thing. I'm currently using the Korn shell to try to format some lines of text. My input has a couple of lines that look something like this:

        Date/Time :- Monday June 03 00:00:00 EDT 2013
        Host Name :- HostNameHere
        PIDS :- NumbersNLetters
        Product Name :- ProductName

    The desired output would be as follows:

        Date/Time="Monday June 03 00:00:00 EDT 2013"
        HostName="HostNameHere"
        PIDS="NumbersNLetters"
        ProductName="ProductName"

    So, I need to get rid of any spaces in the leftmost column, and throw everything in the rightmost column between quotations. I've looked at the cut command, and got this far:

        cut -f 1,2 -d -

    which might produce a result like Date/Time:Monday June 03 00:00:00 EDT 2013, which is close to what I want, but not quite. I wasn't sure if cut could let me add parentheses, and it doesn't look like I can remove spaces that way either. sed seems like it might be closer to the answer, but I wasn't able to find through googling how I might just look for any pattern and not a specific one. I apologize for the incredibly basic question, but reading documentation only gets you so far before your brain starts to ache... If there are any better resources I should be looking at I would be happy to get pointed in the right direction. Thanks!

  • Where to create/keep secret files for license information/trials on Windows/Mac OS X/Linux?

    - by BastiBense
    I'm writing a commercial product which uses a simple registration mechanism and allows the user to use the application for a demo period before purchasing. My application must store the registration information (if entered) somewhere, and/or the date of the first launch, to calculate whether the user is still within the demo/trial period. While I'm pretty much finished with the registration mechanism itself, I now have to find a good way to store the registration information on the user's disk. The most obvious idea would be to store the trial period in the preferences file, but since users tend to delete/tinker with those from time to time, it might be a good idea to keep the registration information in a separate, more hidden file.

    So here's my question: What is the best place/strategy to keep and create such hidden files on Windows, Mac OS X and Linux? Here is what came to my mind so far:

    Linux/Mac OS X: Most Unix-like systems are rather locked down when it comes to places a user can write files to. In most cases this is only the /tmp directory and the user's home directory. I guess the easiest here is probably to create a file with a dot-prefix to make it less visible, then give it a name that won't make it obvious that it's associated with my application.

    Windows: Probably much like Linux/Mac OS X - more recent Windows versions become more restrictive when it comes to file system permissions.

    Anyway, I'd like to hear your ideas and thoughts. Even better if you have already implemented something similar in the past. Thanks!

    Update: For me the places for such files are more relevant than the discussion of whether this way of doing copy protection is good or bad.

  • Compiling 32-bit Program on VS 2008

    - by gordonwd
    I've been developing with VC++ 2003 on an XP PC but am now on Windows 7 and bought a cheap legal copy of VS 2008 to continue work on the same project. My product has to continue to run on customers' XP systems, so I'm strictly interested in a 32-bit executable.

    The first issue I ran into was the PRJ0003 error "spawning cl.exe". I had to add the path to this file to the VC++ Directories settings (it appears in both a bin\amd64 and a bin\x86_amd64 directory, but I don't think it matters output-wise which I use?).

    The issue I now have (not counting a tedious cleanup to convert strcpy to strcpy_s, etc.) is that I'm not clear on whether I'm generating a 32-bit or 64-bit exe out of this. My project properties are set to a target of "Win32", so I assume that all is well. Is this correct? I have read some discussions about this, but it's never quite clear if they are talking about whether the compiler itself is running x64 vs. x86, or whether the compiled code is x64 vs. x86, and how this is differentiated. So am I doing the right thing to generate a 32-bit, Win32, x86 program?

  • What is the best (Windows) program launcher?

    - by AR
    One of the biggest general productivity boosters I've used is a good program launcher. I was a long-time user of SlickRun, and I've tried a few others. My current favorite is Executor - by far the best I've used. Other options:

        - Executor: My current favorite.
        - Vista Start Menu: Pretty good, actually, but Executor is similar (binds to Win+Z) and much more flexible.
        - Quicksilver: For Macs only, but it seems to be the gold standard against which most other launchers are measured.
        - Google Desktop: Press Ctrl+Ctrl and it's a quick launcher!
        - AutoHotKey: Much, much more than just a launcher - more than I need, really.
        - SlickRun: Simple and unobtrusive.
        - Launchy: Seems to be the launcher of choice for many StackOverflow users :)
        - Colibri: "Type Ahead - Information at the tip of your wings". Quite a cool concept.
        - Many, many others. Scott Hanselman outlines some more here.

    I realize that everyone will have their own preferences, but the question is: is there anything that really stands out in terms of speed, features, and especially productivity increase?

  • MVC and binding to List of Checkboxes

    - by Josh
    Here is my problem. I have a list of models that are displayed to the user. On the left is a checkbox for each model to indicate that the user wants to choose this model (in this case, we're building products a user can add to their shopping cart). The model has no concept of being chosen... it strictly has information about the product in question. Having talked with a few other developers, the best I could come up with is getting the FormCollection and string-parsing the key values to determine whether each checkbox is checked or not. This doesn't seem ideal. I was thinking there would be something more strongly bound, but I can't figure out a way to do it. I tried creating another model that has a boolean property to represent being checked and a property for the model, passing a list of that model type to the view, and creating an ActionResult on the controller that accepts a list of the new model / checked property, but it comes back null. Am I just thinking too much like web forms, and should I just continue on with parsing checkbox values?

    Here's what I've done for wrapping the models inside a collection:

        public class SelectableCollection<T> : IList<T> {}

        public class SelectableTrack
        {
            public bool IsChecked { get; set; }
            public bool CurrentTrack { get; set; }
        }

    For the view, I inherit from ViewPage<SelectableCollection<SelectableTrack>>.

    For the controller, I have this as the ActionResult:

        [HttpPost]
        public ActionResult SelectTracks(SelectableCollection sc)
        {
            return new EmptyResult();
        }

    But when I break inside the ActionResult, the collection is null. Any reason why it isn't coming through?

  • Issue pushing object into an array JS

    - by Javacadabra
    I'm having an issue placing an object into my array in JavaScript. This is the code:

        $('.confirmBtn').click(function(){
            //Get reference to the value in the text area
            var comment = $("#comments").val();
            //Create object
            var orderComment = { 'comment' : comment };
            //Add object to the array
            productArray.push(orderComment);
            //Update cookie
            $.cookie('order_cookie', JSON.stringify(productArray), { expires: 1, path: '/' });
        });

    However, when I print the array this is the output:

        Array
        (
            [0] => Array ( [stockCode] => CBL202659/A [quantity] => 8 )
            [1] => Array ( [stockCode] => CBL201764 [quantity] => 6 )
            [2] => TEST TEST
        )

    I would like it to look like this:

        Array
        (
            [0] => Array ( [stockCode] => CBL202659/A [quantity] => 8 )
            [1] => Array ( [stockCode] => CBL201764 [quantity] => 6 )
            [2] => Array ( [comment] => TEST TEST )
        )

    I added products to the array in a similar way and it worked fine:

        var productArray = []; // Will hold order items

        $(".orderBtn").click(function(event){
            //Check to ensure quantity > 0
            if(quantity == 0){
                console.log("Quantity must be greater than 0")
            }else{ //It is, so continue
                //Show the order box
                $(".order-alert").show();
                event.preventDefault();
                //Get reference to the product clicked
                var stockCode = $(this).closest('li').find('.stock_code').html();
                //Get reference to the quantity selected
                var quantity = $(this).closest('li').find('.order_amount').val();
                //Order item (contains stockCode and quantity) - can add whatever data I like here
                var orderItem = { 'stockCode' : stockCode, 'quantity' : quantity };
                //Check if cookie exists
                if($.cookie('order_cookie') === undefined){
                    console.log("Creating new cookie");
                    //Add object to the array
                    productArray.push(orderItem);
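    One possible approach, not from the original post: rebuild the working array from the cookie before pushing, so the comment always lands in the saved array as an object alongside the order items. The cookie name and the jQuery cookie plugin calls follow the snippets above; everything else here is an assumption.

        $('.confirmBtn').click(function () {
            // Hypothetical sketch: re-read the cookie so we always push onto the saved array,
            // even if the in-memory productArray has drifted out of sync with it.
            var saved = $.cookie('order_cookie');
            var items = saved ? JSON.parse(saved) : [];

            // Push the comment as an object, never as a bare string.
            items.push({ comment: $('#comments').val() });

            $.cookie('order_cookie', JSON.stringify(items), { expires: 1, path: '/' });
        });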

  • jQuery, AJAX: Trying AJAX for the first time but can't get this to work...

    - by fwaokda
    Trying to get the code below to work, but the success callback doesn't execute - the error one does. How can I get more detailed information on what exactly is going wrong? I'll include the code for next.php in a pastebin link also. Thanks. [next.php: http://pastebin.com/Gnu2AfU8 ]

        $("a#next").click(function() {
            $.ajax({
                type : 'POST',
                url : 'next.php',
                dataType : 'json',
                data : { nextID : $("a#next").attr("rel") },
                success : function ( data ) {
                    $("img#spotlight").attr("src", data.spotlightimage);
                    $("div#showcase h1").text(data.title);
                    $("div#showcase h2").text(data.subtitle);
                    for(var i=0; i < data.size; i++) {
                        $("ul#features").append("<li>").text(data.feature+i).append("</li>");
                    }
                    $("div#showcase p").text(data.description);
                    for(i=1; i < data.picsize; i++) {
                        $("div.thumbnails ul").append("<li>").text(data.image+i).append("</li>");
                    }
                    $("a#next").attr("rel", $("a#next").attr("rel") + 1);
                },
                error : function ( XMLHttpRequest, textStatus, errorThrown) {
                    $("div#showcase h1").text("An error has occured.");
                }
            });
        });
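    A minimal way to see why the request fails (not from the original post) is to log everything jQuery hands to the error callback, including the raw response body, which usually reveals a PHP notice or malformed JSON coming back from next.php. This is only a debugging sketch of the same request:

        // Hypothetical debugging call: same POST, but dump the failure details to the console.
        $.ajax({
            type : 'POST',
            url : 'next.php',
            dataType : 'json',
            data : { nextID : $("a#next").attr("rel") },
            success : function (data) {
                console.log('parsed OK:', data);
            },
            error : function (jqXHR, textStatus, errorThrown) {
                // "parsererror" usually means next.php emitted something besides valid JSON
                // (a PHP notice, stray whitespace, etc.); responseText shows the raw output.
                console.log('status: ' + textStatus);
                console.log('thrown: ' + errorThrown);
                console.log('raw response: ' + jqXHR.responseText);
            }
        });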

  • Invoking [SKProductsRequest start] hangs on iOS 4.0

    - by figelwump
    Encountering an issue with SKProductsRequest that is specific to iOS 4.0. The problematic code:

        - (void)requestProductData
        {
            NSSet *productIdentifiers = [NSSet setWithObjects:kLimitedDaysUpgradeProductId, kUnlimitedUpgradeProductId, nil];
            self.productsRequest = [[SKProductsRequest alloc] initWithProductIdentifiers:productIdentifiers];
            self.productsRequest.delegate = self;
            [self.productsRequest start];
        }

        - (void)productsRequest:(SKProductsRequest *)request didReceiveResponse:(SKProductsResponse *)response
        {
            NSLog(@"didReceiveResponse");
        }

    When [SKProductsRequest start] is invoked, the productsRequest:didReceiveResponse: delegate method is never invoked; further, the entire app hangs and is completely unresponsive to input. Obviously, this is a huge issue for our iOS 4.0 users as it not only breaks payments but makes the app completely unusable.

    Some other things to note: this only happens on iOS 4.0; iOS 4.2, 3.x are fine. Also: if the delegate is not set on the SKProductsRequest (i.e. comment out the line "self.productsRequest.delegate = self;"), the app doesn't hang (but of course in that case we have no way of getting the product info). Also, the problem still reproduces with everything stripped out of the productsRequest:didReceiveResponse: callback (that method never actually gets called). Finally, if the productIdentifiers NSSet object is initialized to an empty set, the hang doesn't occur.

    Has anybody else experienced this? Any ideas/thoughts on what could be going on here, and how we might be able to work around this?

  • Close modal window

    - by Eyal
    I have a page with products. When adding a product to the shopping cart, a modal window is fired up for confirmation - this can take up to 2 seconds. I want to show another modal window just before the confirmation modal to show "loading..."; my problem is that I don't know how to close the "loading..." modal. This is the code which fires up the confirmation modal:

        $(document).ready(function () {
            var $dialog = $('<div style="background-color:red"></div>')
                .html('<h1>loading...</h1>')
                .dialog({
                    autoOpen: false,
                    title: 'loading...'
                });

            $('.AddToCartButton').click(function () {
                $dialog.dialog('open');
            });
        });

    On the 'confirmation' modal I am trying to close the 'loading...' modal with this code:

        <script type="text/javascript">
            $('#AddToCartButton').dialog('close');
        </script>

    Please help. Thanks.
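    One possible fix, not from the original post: .dialog('close') has to be called on the element that was turned into a dialog, not on the button that opened it, so give the loading div a known id and close it by that id once the confirmation markup arrives. The id below is a made-up name for the sketch:

        // Create the loading dialog once, with a known id (hypothetical name).
        var $loading = $('<div id="loadingDialog" style="background-color:red"><h1>loading...</h1></div>')
            .dialog({ autoOpen: false, title: 'loading...' });

        $('.AddToCartButton').click(function () {
            $loading.dialog('open');
        });

        // Later, from the script that shows the confirmation modal:
        $('#loadingDialog').dialog('close');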

  • Autoreload webpage on new release

    - by user3726562
    I have a website in PHP which is completely AJAX-based. There is an index.php, but apart from it, all the other pages are never rendered directly in the browser. Instead, all the POST and GET requests are done from JavaScript through AJAX. So basically, if you go to /contact.php you will not see anything; all the pages are rendered inside index.php.

    Now, there are a lot of people that use this page that are not very good at using the web. Asking them to "refresh" the page makes them wonder what we are talking about, unfortunately. The biggest issue happens when we do a new release. The JavaScript code especially (but not only) can be the old one in a client's webpage, as they maybe haven't refreshed the page for some weeks (lol). So I do an svn update and the new code is on the server. I just refresh my page and see the new features. However, the poor guys that don't really know how to do a refresh will not see anything. I have added a big button on the page with the text "refresh", which does a location.reload. Hopefully this can help a few of them.

    But my question is: how to "force" the browser to reload itself when a new svn version has been released? Hopefully in a simple way, without needing some node.js, JavaScript timer or stuff like that. It is also quite important not to refresh the page when the user is doing something with the page (maybe writing a mail in the UI, and then suddenly the page gets refreshed: not so good).
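    A minimal sketch of one possible approach, not from the original post: since the app already talks to the server constantly over AJAX, the PHP side could attach a release identifier to every response and the client could reload once it changes, with no timer or extra polling. This assumes the AJAX calls go through jQuery and that the server is changed to send such a header; both the header name and the setup are assumptions.

        // Hypothetical sketch: the server adds header('X-App-Version: <release id>') to every
        // AJAX response; the client reloads once that id changes.
        var currentVersion = null;

        $(document).ajaxComplete(function (event, xhr) {
            var serverVersion = xhr.getResponseHeader('X-App-Version');
            if (!serverVersion) { return; }              // header not sent, do nothing
            if (currentVersion === null) {
                currentVersion = serverVersion;          // remember the release we loaded with
            } else if (serverVersion !== currentVersion) {
                // A new release is live. A real version should wait until the UI is idle
                // (e.g. no form has unsaved input) before reloading, per the concern above.
                location.reload();
            }
        });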

  • Embedded Development Board

    - by ALF3130
    I'm new to the embedded development world and am looking to get my very first board. After some research, I realize that there aren't many choices with FPUs. This is important for my project as I'm going to be doing quite a bit of floating point computation. I found the Mini2440, which seems to run on the ARM920T core. This particular unit is perfect for my needs (decent price, all the right I/O ports, and a touch screen to boot) but it seems that it doesn't have an FPU. I don't know how big of a penalty I'd be paying for FP emulation, so I'm unsure of whether to pull the trigger on this one. That said:

        1. Can someone please confirm whether this product (Mini2440) has an FPU or not?
        2. My project will do image capture and analysis. Does anyone have any experience with running things like OpenMP on such platforms?
        3. Please suggest any other similar boards in the <= $200 price range that have an FPU.

    This world is new to me. Any other advice or things I should be aware of is much appreciated.

  • Configuring VisualSVN to exclude files

    - by douglasrahn
    I have been searching for documentation on how to exclude files with VisualSVN, but have not found any. All the documentation I find seems not to match my file structure, or I am missing some files/directories it references. For example, the only file I find with configuration items in it seems to be completely commented out and is missing the miscellaneous section as well as any auto-properties enable setting, etc.

    Ultimately I need to exclude some files so that my development can continue without SVN errors. I am constantly receiving errors for pbxuser and other project files and would like to make sure this is not causing some of my other headaches. Here is information I would love to use but can't, as it doesn't match:

        How to "fix" Subversion in XCode 3
        Posted on December 10, 2008 by Rodney Aiglstorfer in Xcode

        If you don't take the necessary steps to prepare for subversion, you will run into problems using it in XCode. This is because XCode produces files that "confuse" Subversion because it either thinks they are text files when they are really binary files or the reverse. To overcome these limitations, you need to make some simple changes to the subversion configuration file in your user home directory. Here are some steps you can follow to ensure that you will be able to use Subversion within XCode without any issues.

        Step 1. Open the subversion configuration file ~/.subversion/config
        NOTE: If the ".subversion" directory doesn't exist yet then run this command which fails but will create the necessary files to get you started: svn status

        Step 2. Enable "global-ignores" and add new things to ignore
        Find the line that contains the text "global-ignores" and append the following text:
        build *~.nib *.so *.pbxuser *.mode* *.perspective*

    What I am really looking for is how to exclude the files I know I need to for VisualSVN - it shouldn't really be different from regular svn, as it claims to use the same product and just places a GUI on it.
