Search Results

Search found 21511 results on 861 pages for 'appstore approval process'.

  • syntax for MySQL INSERT with an array of columns

    - by Mike_Laird
    I'm new to PHP and MySQL query construction. I have a processor for a large form. A few fields are required, most fields are user optional. In my case, the HTML ids and the MySQL column names are identical. I've found tutorials about using arrays to convert $_POST into the fields and values for INSERT INTO, but I can't get them working - after many hours. I've stepped back to make a very simple INSERT using arrays and variables, but I'm still stumped. The following line works and INSERTs 5 items into a database with over 100 columns. The first 4 items are strings, the 5th item, monthlyRental is an integer. $query = "INSERT INTO `$table` (country, stateProvince, city3, city3Geocode, monthlyRental) VALUES ( '$country', '$stateProvince', '$city3', '$city3Geocode', '$monthlyRental')"; When I make an array for the fields and use it, as follows: $colsx = array('country,', 'stateProvince,', 'city3,', 'city3Geocode,', 'monthlyRental'); $query = "INSERT INTO `$table` ('$colsx') VALUES ( '$country', '$stateProvince', '$city3', '$city3Geocode', '$monthlyRental')"; I get a MySQL error - check the manual that corresponds to your MySQL server version for the right syntax to use near ''Array') VALUES ( 'US', 'New York', 'Fairport, Monroe County, New York', '(43.09)' at line 1. I get this error whether the array items have commas inside the single quotes or not. I've done a lot of reading and tried many combinations and I can't get it. I want to see the proper syntax on a small scale before I go back to foreach expressions to process $_POST and both the fields and values are arrays. And yes, I know I should use mysql_real_escape_string, but that is an easy later step in the foreach. Lastly, some clues about the syntax for an array of values would be helpful, particularly if it is different from the fields array. I know I need to add a null as the first array item to trigger the MySQL autoincrement id. What else? I'm pretty new, so please be specific.
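
    A minimal sketch (not from the question) of one way to build the column list with implode(); the 'Array' in the error comes from interpolating the array itself into the string, and column identifiers take backticks rather than single quotes. The table and column names are assumed to be trusted values, not user input:

    ```php
    <?php
    // Hypothetical example: join the column names and escaped values into strings
    // before interpolating them into the INSERT statement.
    $cols   = array('country', 'stateProvince', 'city3', 'city3Geocode', 'monthlyRental');
    $values = array($country, $stateProvince, $city3, $city3Geocode, $monthlyRental);

    $colList = '`' . implode('`, `', $cols) . '`';   // `country`, `stateProvince`, ...
    $valList = "'" . implode("', '", array_map('mysql_real_escape_string', $values)) . "'";

    $query = "INSERT INTO `$table` ($colList) VALUES ($valList)";
    ```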

  • how can I make a "downstream only" copy of a file in TFS

    - by jcollum
    I've got this SQL script that needs to exist in two places in source control. I want to have only one real copy of this file and keep a virtual copy of the file in the other solution. One is needed for a unit test and the other for a development tool. The files, should, by definition, always be the same. If they have differences then there's a problem with our process. In Sourcegear I could make a virtual copy of a specific version of a file and keep it somewhere else in the source tree. That doesn't seem to be possible in TFS. Is it possible in SVN? So what are my options here? Branching/merging -- which is what the TFS team says I should be doing here -- means just another step that I have to remember to do. Plus it isn't automatic and I would prefer that this be automated. Is there some way to run an exe on checkin of a specific file? I'm thinking if I could do that then I could do a checkout-edit-checkin of the downstream copy of the file.

  • Good C string library

    - by chamakits
    Hello all. I recently got inspired to start up a project I've been wanting to code for a while. I want to do it in C, because memory handling is key in this application. I was searching around for a good implementation of strings in C, since I know doing it myself could lead to some messy buffer overflows, and I expect to be dealing with a fairly big amount of strings. I found this article which gives details on each, but they each seem to have a good amount of cons going for them (don't get me wrong, this article is EXTREMELY helpful, but it still worries me that even if I were to choose one of those, I wouldn't be using the best I can get). I also don't know how up to date the article is, hence my current plea. What I'm looking for is something that can hold a large amount of characters and simplifies the process of searching through the string. If it allows me to tokenize the string in any way, even better. Also, it should have some pretty good I/O performance. Printing and formatted printing aren't quite a top priority. I know I shouldn't expect a library to do all the work for me, but was just wondering if there was a well-documented string library out there that could save me some time and some work. Any help is greatly appreciated. Thanks in advance! EDIT: I was asked about the license I prefer. Any sort of open source license will do, but preferably GPL (v2 or v3). EDIT2: I found the betterString (bstring) library and it looks pretty good. Good documentation, a small yet versatile set of functions, and easy to mix with C strings. Anyone have any good or bad stories about it? The only downside I've read about is that it lacks Unicode (again, I've read about this, haven't seen it face to face just yet), but everything else seems pretty good. EDIT3: Also, it's preferable that it be pure C.

  • Speed up csv export when using php from mysql database query

    - by John
    OK, so I've got a web system (built on CodeIgniter & running on MySQL) that allows people to query a database of postal address data by making selections in a series of forms until they arrive at the selection they want, pretty standard stuff. They can then buy that information and download it via that system. The queries run very fast, but when it comes to applying that query to the database and exporting it to CSV, once the datasets get to around the 30,000 record mark (each row has around 40 columns, of which about 20 are populated with on average 20 chars of data per cell) it can take 5 or so minutes to export to CSV. So, my question is, what is the main cause of the slowness? Is it that the resultset of data from the query is so large that it is running into memory issues? Should I therefore allow much more memory to the process? Or is there a much more efficient way of exporting to CSV from a MySQL query that I'm not using? Should I save the contents of the query to a temp table and simply export the temp table to CSV? Or am I going about this all wrong? Also, is the fact that I'm using CodeIgniter's Active Record for this prohibitive, due to the way that it stores the resultset? Any advice is welcome! Thank you for reading!
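
    For comparison, a rough sketch (not CodeIgniter-specific; the connection details and table name here are made up) of streaming the result straight to a CSV file with an unbuffered mysqli query, so the full result set never has to sit in PHP memory at once:

    ```php
    <?php
    // Hypothetical streaming export: MYSQLI_USE_RESULT avoids buffering all rows,
    // and fputcsv() writes each row out as soon as it arrives.
    $db  = new mysqli('localhost', 'user', 'pass', 'postal_data');
    $out = fopen('/tmp/export.csv', 'w');

    $result = $db->query('SELECT * FROM addresses WHERE list_id = 42', MYSQLI_USE_RESULT);
    while ($row = $result->fetch_assoc()) {
        fputcsv($out, $row);   // one row at a time, roughly constant memory
    }
    $result->free();
    fclose($out);
    ```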

  • Filter design for audio signal.

    - by beanyblue
    What I am trying to do is simple. I have a few .wav files. I want to remove noise and filter out specific frequencies. I don't have MATLAB and I intend to write my own code for all the filters. Right now, I have a way to read the .wav file and dump out the structure into a text file. My questions are the following: Can I directly apply the digital filters to this sampled data (i.e., can I directly do a convolution between my input samples and h(n) for the filter function that I choose)? How do I choose the number of coefficients for the window function? I have Octave, so if someone can point me to anything that gives me some idea of how to process the .wav file using Octave, that would be great too. I want to be able to filter out the frequencies and then listen to the sound again. Is this possible with Octave? I'm just a beginner with these kinds of things, so please bear with me if my questions are too naive. Any help will be great.

  • Is it possible to reference remote content from chrome.manifest? (XULRunner)

    - by siemaa
    Hi, I have a XULRunner application and I've been trying to reference remote content from the chrome.manifest file. It's an application for the company I work in; it's run on a number of computers (most of them are used by other employees as well) as a kind of internet monitoring service. The problem I'd like to solve is this: updating the code of such an application usually requires me to manually copy the modified files to every computer that the application is running on (I've had no luck trying to make automatic updates via the XULRunner platform). This process has become very tedious. What I'd like to have is a web server where all of the XUL and JS files would be accessible, so that every application could reference them from there. This would require me only to update the code on that server, and the applications (when restarted) would automatically get the latest code. What I managed to do: I can reference JS scripts from a XUL file using http based URLs and everything works fine (I can use local, binary components etc.), although the XUL file has to be local - that I'd like to change. But when I write in chrome.manifest a line like: content my_app http://path/to/app/files/ and then use the line in default/preferences/pref.js pref("toolkit.defaultChromeURI", "chrome://my_app/content/my_app.xul"); it just opens a console window (to test, I manually run the application with the -console option) and no code gets executed. The file can be downloaded remotely using wget, so I guess this isn't a web server issue. The applications work on Windows machines. Is there some kind of security issue causing such behavior, or am I doing something wrong? Is it even possible to register remote, http based content as chrome?

  • Migrate from Oracle to MySQL

    - by Cassy
    Hi everyone. We ran into serious performance problems with our Oracle database and we would like to try to migrate to a MySQL-based database (either MySQL directly or, preferably, Infobright). The thing is, we need to let the old and the new system overlap for at least some weeks, if not months, before we actually know if all the features of the new database match our needs. So, here is our situation: The Oracle database consists of multiple tables, each with millions of rows. During the day, there are literally thousands of statements, which we cannot stop for migration. Every morning, new data is imported into the Oracle database, replacing some thousands of rows. Copying this process is not a problem, so we could, in theory, import into both databases in parallel. But, and here lies the challenge, for this to work we need an export from the Oracle database with a consistent state from one day. (We cannot export some tables on Monday and some others on Tuesday, etc.) This means that at least the export should finish in less than one day. Our first thought was to dump the schema, but I wasn't able to find a tool to import an Oracle dump file into MySQL. Exporting tables as CSV files might work, but I'm afraid it could take too long. So my question now is: What should I do? Is there any tool to import Oracle dump files into MySQL? Does anybody have any experience with such a large-scale migration? Thanks in advance, Cassy PS: Please don't suggest performance optimization techniques for Oracle, we already tried a lot :-)

  • num_rows is 0 when it should be >0 for php mysqli code

    - by jpporterVA
    My num_rows is coming back as 0, and I've tried calling it several ways, but I'm stuck. Here is my code: $conn = new mysqli($dbserver, "dbuser", "dbpass", $dbname); // get the data $sql = 'SELECT AT.activityName, AT.createdOn FROM userActivity UA, users U, activityType AT WHERE U.userId = UA.userId and AT.activityType = UA.activityType and U.username = ? order by AT.createdOn'; $stmt = $conn->stmt_init(); $stmt->prepare($sql); $stmt->bind_param('s', $requestedUsername); $stmt->bind_result($activityName, $createdOn); $stmt->execute(); // display the data $numrows = $stmt->num_rows; $result=array("user activity report for: " . $requestedUsername . " with " . $numrows . " rows:"); $result[]="Created On --- Activity Name"; while ($stmt->fetch()) { $msg = " " . $createdOn . " --- " . $activityName . " "; $result[] = $msg; } $stmt->close(); There are multiple rows found, and the fetch loop processes them just fine. Any suggestions on what will enable me to get the number of rows returned in the query? Suggestions are much appreciated. Thanks in advance.
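
    On a mysqli prepared statement, num_rows is only populated once the result set has been buffered with store_result(); a sketch of that fix, reusing the statement variables from the question:

    ```php
    <?php
    // After execute(), buffer the result set client-side so num_rows is populated.
    $stmt = $conn->stmt_init();
    $stmt->prepare($sql);
    $stmt->bind_param('s', $requestedUsername);
    $stmt->execute();
    $stmt->store_result();            // without this, num_rows stays 0
    $stmt->bind_result($activityName, $createdOn);

    $numrows = $stmt->num_rows;       // now reflects the real row count
    while ($stmt->fetch()) {
        // ... build $result exactly as in the question ...
    }
    $stmt->close();
    ```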

  • Calling member functions dynamically

    - by user652511
    I'm pretty sure it's possible to call a class and its member function dynamically in Delphi, but I can't quite seem to make it work. What am I missing? // Here's a list of classes (some code removed for clarity) moClassList : TList; moClassList.Add( TClassA ); moClassList.Add( TClassB ); // Here is where I want to call an object's member function if the // object's class is in the list: for i := 0 to moClassList.Count - 1 do if oObject is TClass(moClassList[i]) then with oObject as TClass(moClassList[i]) do Foo(); I get an undeclared identifier for Foo() at compile. Clarification/Additional Information: What I'm trying to accomplish is to create a Change Notification system between business classes. Class A registers to be notified of changes in Class B, and the system stores a mapping of Class A - Class B. Then, when a Class B object changes, the system will call a A.Foo() to process the change. I'd like the notification system to not require any hard-coded classes if possible. There will always be a Foo() for any class that registers for notification. Maybe this can't be done or there's a completely different and better approach to my problem. By the way, this is not exactly an "Observer" design pattern because it's not dealing with objects in memory. Managing changes between related persistent data seems like a standard problem to be solved, but I've not found very much discussion about it. Again, any assistance would be greatly appreciated. Jeff

  • Huge burst of memory in c# service, what could be the cause?

    - by Daniel
    I'm working on a C# service application and I have this problem where, out of nowhere and for no obvious reason, the memory for the process will climb from 150MB to almost 2GB in about 5 seconds and then back to 150MB. But nothing in our system should be using anywhere near that amount of memory (so it's probably a bug somewhere). It might be a tight while-true loop somewhere, but the CPU usage at the time was very low, so I thought I'd look for other ideas. Now the weirder thing is that when I compile the service for 64-bit, the same massive burst will occur except it exceeded 10GB of RAM (paging most of it) and it just caused lots of problems with the computer and everything running on it. After a while it shuts down, but it looks like Windows is still willing to give it more memory. Would you have any ideas or tools that I can use in order to find this? Yes, it has lots of logging; however, nothing in the logs stands out as to why this is happening. I can run the service in a console app mode, so my next test was going to be running it in the Visual Studio debugger and seeing if I can find anything. It only happens occasionally, but usually about 10-20 minutes after startup. In 32-bit mode it cleans up and continues on like normal. In 64-bit mode it crashes after a while and uses stupid amounts of memory. But I'm really stumped as to why this is happening!

  • android app doesn't show on the emulator

    - by Anna Finela Constantino
    I made an Android application using Eclipse and it was working fine when I started developing it. But then, as I continued to develop the app, the emulator seemed not to be updating the application with the changes I had made to the code. So I tried deleting my AVD and creating a new one every time I run my app, and that seemed to have worked. Now my problem is that my emulator doesn't show my app. It says "Failed to install *.apk on device 'emulator-5554': An established connection was aborted by the software in your host machine". I searched for ways to solve it but none of them seem to have worked. I tried killing the adb process (as most of my search results suggested) in the task manager, but still my app doesn't show on the emulator. The emulator is running and all, but my icon is nowhere to be found. Am I missing something? Is the problem connected to the first problem I had before? As I said, I started developing Android apps recently, so please bear with me. :) I appreciate all your help. Thanks in advance.

  • JQuery Control Update Not Happening

    - by Mad Halfling
    Hi, I've got a script that disables a button input control, empties a table body and then (after an AJAX call) re-populates it. It all works fine, but sometimes there are a lot of rows in the table so the browser (IE) takes a while to empty and refill it. The strange thing is that while the rows are being emptied, the button still appears to be enabled; however, if I put an alert between the button being disabled and the tbody being emptied, the button works properly, disabling visibly before the alert comes up. Is there any way I can get the button to update before the resource-consuming table emptying process/command commences? Thx MH Code sample, as requested (but it's not complex, so I didn't initially include it): $('#Search').attr('disabled', true); $('#StatusSpan').empty(); $('#DisplayTBody').empty(); Then I perform my AJAX call, re-enable the button and repopulate the table. As I mentioned, normally this is really quick and isn't a problem, but if there are, say, 1500 rows in the table it takes a while to clear down, and the 'Search' button doesn't update on the screen. However, if I put an alert after the .attr('disabled' line, the button visibly updates while the alert box is up, but without that the button doesn't visibly disable until after the table clears (which is about 3 or 4 seconds with 1500 rows); it just stays in its down/"mid-press" state. I don't have a problem with the time the browser is taking to render the table changes, that's just life, but I need the users to see visible feedback so they know the search has started.

  • The cross-thread usage of "HttpContext.Current" property and related things

    - by smwikipedia
    I read the following statement in Essential ASP.NET with Examples in C#: Another useful property to know about is the static Current property of the HttpContext class. This property always points to the current instance of the HttpContext class for the request being serviced. This can be convenient if you are writing helper classes that will be used from pages or other pipeline classes and may need to access the context for whatever reason. By using the static Current property to retrieve the context, you can avoid passing a reference to it to helper classes. For example, the class shown in Listing 4-1 uses the Current property of the context to access the QueryString and print something to the current response buffer. Note that for this static property to be correctly initialized, the caller must be executing on the original request thread, so if you have spawned additional threads to perform work during a request, you must take care to provide access to the context class yourself. I am wondering about the root cause of the bold part, and one thing leads to another; here are my thoughts: We know that a process can have multiple threads. Each of these threads has its own stack. These threads also have access to a shared memory area, the heap. The stack, then, as I understand it, is kind of where all the context for that thread is stored. For a thread to access something in the heap it must use a pointer, and the pointer is stored on its stack. So when we make cross-thread calls, we must make sure that all the necessary context info is passed from the caller thread's stack to the callee thread's stack. But I am not quite sure if I have made any mistakes. Any comments will be deeply appreciated. Thanks. ADD: Here the stack is limited to the user stack.

  • Background loading javascript into iframe without using jQuery/Ajax?

    - by user210099
    I'm working on an offline-only help system which requires loading a large amount of search-related data into an iframe before the search functionality can be used. Due to the folder structure of the project, I am unable to use Ajax-related background load methods, since the files I need are loaded a few directories "up and over." I have written some code which delays the loading of the help data until the rest of the webpage is loaded. The help data consists of a bunch of JavaScript files which have information about the terms, etc. that exist in the help books which are installed on the system. The webpage works fine until I start to load this help data into a hidden iframe. While the JavaScript files are loading, I cannot use any of the webpage. Links that require small files to be downloaded for hover-over effects don't show up, and JavaScript (switching tabs on the page) has no effect. I'm wondering if this is just a limitation of the way JavaScript works, or if there's something else going on here. Once all the files are loaded for the help system, the webpage works as expected. function test(){ var MGCFrame = eval("parent.parent"); if((ALLFRAMESLOADED == true)){ t2 = MGCFrame.setTimeout("this.IHHeader.frames[0].loadData()",1); } else{ t1 = MGCFrame.setTimeout("this.IHHeader.frames[0].test()",1000); } } loadData() simply starts the data loading process. Thanks for any help you can provide.

  • How to take a current snapshot of a MySQL table and store it in a CSV file (after creating it)?

    - by Rachel
    I have a large database table, approximately 5GB, and I want to get a current snapshot of the database using "SELECT * FROM MyTableName". I am using PDO in PHP to interact with the database. So preparing a query and then executing it // Execute the prepared query $result->execute(); $resultCollection = $result->fetchAll(PDO::FETCH_ASSOC); is not an efficient way, as lots of memory is used for storing the data in the associative array, which is approximately 5GB. My final goal is to collect the data returned by the SELECT query into a CSV file and put the CSV file at an FTP location from where the client can get it. The other option I thought of was to do: SELECT * INTO OUTFILE "c:/mydata.csv" FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' LINES TERMINATED BY "\n" FROM my_table; But I am not sure if this would work, as I have a cron job that initiates the complete process and we do not have a CSV file yet, so basically for this approach the PHP script will have to create a CSV file, do a SELECT query on the database, and store the SELECT query result in the CSV file. What would be the best or most efficient way to do this kind of task? Any suggestions!
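
    A rough sketch of the PDO route with buffering turned off, so rows are fetched one at a time and written straight to the CSV as they arrive (the DSN, credentials and file path here are placeholders):

    ```php
    <?php
    // Hypothetical streaming export with PDO: disable buffered queries and write
    // each row with fputcsv() as it is fetched, keeping memory use flat.
    $pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
    $pdo->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, false);

    $out  = fopen('/path/to/mydata.csv', 'w');
    $stmt = $pdo->query('SELECT * FROM MyTableName');

    while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
        fputcsv($out, $row);
    }
    fclose($out);
    ```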

  • Joomla User Login Question

    - by user277127
    I would like to enable users of my existing web app to login to Joomla with the credentials already stored in my web app's database. By using the Joomla 1.5 authentication plugin system -- http://docs.joomla.org/Tutorial:Creating_an_Authentication_Plugin_for_Joomla_1.5 -- I would like to bypass the Joomla registration process and bypass creating users in the Joomla database altogether. My thought had been that I could simply populate a User object, which would be stored in the Session, and that this would replace the need to store a user in the Joomla database. After looking through the code surrounding user management in Joomla, it seems like any time you interact with the User object, the database is being queried. It therefore seems like my initial idea won't work. Is that right? It looks like, in order to achieve the effect I want, I will have to actually register a user from within the authentication plugin at the time they first login. This is not ideal, so before I go forward with it, I wanted to check with Joomla developers whether it is possible to do what I described above. Thanks in advance -- I am new to Joomla and greatly appreciate your help!

  • deployd authentication using jQuery Ajax

    - by user2507987
    I have installed deployd on my Debian 7.0.0 64-bit machine, and I have also successfully installed MongoDB on it. I have created some collections and a user collection in the deployd dashboard, and then, following the user guide on how to connect and query tables in deployd, I chose jQuery Ajax to log in to deployd from my localhost site. After a successful login I try to get/post some data, but somehow deployd returns access denied. I have created a collection named people, and at the GET, POST, PUT events I have written this code: cancelUnless(me, "You are not logged in", 401); Then, using this Ajax code, I try to log in and POST new people data: $(document).ready(function(){ /* Create query for username and password for login */ var request = new Object; request.username = 'myusername'; request.password = 'mypassword'; submitaddress = "http://myipaddress:myport/users/login"; $.ajax({ type: "POST", url: submitaddress, data: request, cache: false, success: function(data){ var returndata = eval(data); /* After Login success try to post people data */ if (returndata){ var request2 = new Object; request2.name = 'People Name'; submitaddress2 = "http://myipaddress:myport/people"; $.ajax({ type: "POST", url: submitaddress2, data: request2, cache: false, success: function(){ } }) } } } }); }) The login process succeeds and returns a session id and my user id, but when I then try to POST people data it returns "You are not logged in". Can anyone help me? What is the correct way to log in to deployd using jQuery from another website (cross-domain)?

  • How does a java compiler resolve a non-imported name

    - by gexicide
    Suppose I use a type X in my Java compilation unit, which lives in package foo.bar, and X is not defined in the compilation unit itself, nor is it directly imported. How does a Java compiler resolve X efficiently? There are a few possibilities for where X could reside: (1) X might be imported via a star import a.b.*; (2) X might reside in the same package as the compilation unit; (3) X might be a language type, i.e. reside in java.lang. The problem I see is especially (2). Since X might be a package-private type, it is not even required that X resides in a compilation unit that is named X.java. Thus, the compiler must look into all entries of the class path and search for any classes in the package foo.bar; it then must read every class that is in package foo.bar to check whether X is included. That sounds very expensive. Especially when I compile only a single file, the compiler has to read dozens of class files only to find a type X. If I use a lot of star imports, this procedure has to be repeated for a lot of types (although class files won't be read twice, of course). So is it advisable to also import types from the same package, to speed up the compilation process? Or is there a faster method for resolving an unimported type X which I was not able to find?

  • How can I save a file on an http server using just http requests and javascript?

    - by user170902
    Hi all, I'm trying to understand some of the basics of web servers/HTML/JavaScript etc. I'm not interested in any of the various frameworks like PHP/ASP; I'm just trying to get a low-level look at things (for now). At the moment I'm trying to understand how data can be sent to/saved on the backend, but I must admit that I'm getting a bit lost in the various specs/technical stuff on W3 at the moment! If I have some data, say XML, that I want to save on the backend, how do I go about it? I assume that I would have to use something like an HTTP PUT or POST request to an HTML doc that contains some JavaScript that in turn would process the data, e.g. save it somewhere. Now from googling around I can see that this doesn't seem to be the case, so my assumptions are completely wrong! So how is it done? Can it be done, or do I have to use something like PHP or ASP? TIA. bg

  • What goes into the "Controller" in "MVC"?

    - by P72endragon
    I think I understand the basic concepts of MVC - the Model contains the data and behaviour of the application, the View is responsible for displaying it to the user and the Controller deals with user input. What I'm uncertain about is exactly what goes in the Controller. Let's say, for example, I have a fairly simple application (I'm specifically thinking Java, but I suppose the same principles apply elsewhere). I organise my code into 3 packages called app.model, app.view and app.controller. Within the app.model package, I have a few classes that reflect the actual behaviour of the application. These extend Observable and use setChanged() and notifyObservers() to trigger the views to update when appropriate. The app.view package has a class (or several classes for different types of display) that uses javax.swing components to handle the display. Some of these components need to feed back into the Model. If I understand correctly, the View shouldn't have anything to do with the feedback - that should be dealt with by the Controller. So what do I actually put in the Controller? Do I put the public void actionPerformed(ActionEvent e) in the View with just a call to a method in the Controller? If so, should any validation etc. be done in the Controller? If so, how do I feed error messages back to the View - should that go through the Model again, or should the Controller just send them straight back to the View? If the validation is done in the View, what do I put in the Controller? Sorry for the long question, I just wanted to document my understanding of the process and hopefully someone can clarify this issue for me!

  • Scientific Data processing (Graph comparison and interpretation)

    - by pinkynobrain
    Hi Stack Overflow friends, I'm trying to write a program to automate one of my more boring and repetitive work tasks. I have some programming experience but none with processing or interpreting large volumes of data, so I am seeking your advice (both suggestions of techniques to try and also things to read to learn more about doing this stuff). I have a piece of equipment that monitors an experiment by taking repeated samples and displays the readings on its screen as a graph. The input of the experiment can be altered, and one of these changes should produce a change in a section of the graph, which I currently identify by eye and which is what I'm looking for in the experiment. I want to automate it so that a computer looks at a set of results and spots the experiment input that causes the change. I can already extract the results from the machine. Currently the results for a run are in the form of an integer array, with the index being the sample number and the corresponding value being the measurement. The overall shape of the graph will be similar for each experiment run. The change I'm looking for will be roughly the same and will occur in approximately the same place every time for the correct experiment input. Unfortunately there are a few gotchas that make this problem more difficult. There is some noise in the measuring process which means there is some random variation in the measured values between different runs, although the overall shape of the graph remains the same. The time the experiment takes varies slightly each run, causing two effects. First, the whole graph may be shifted slightly on the x axis relative to another run's graph. Second, individual features may appear slightly wider or narrower in different runs. In both these cases the variation isn't particularly large, and you can assume that the only non-random variation is caused by the correct input being found. Thank you for your time, Pinky

  • Filling in uninitialized array in java? (or workaround!)

    - by AlexRamallo
    Hello all, I'm currently in the process of creating an OBJ importer for an OpenGL ES Android game. I'm relatively new to the Java language, so I'm not exactly clear on a few things. I have an array which will hold the vertices of the model (along with a few other arrays as well): float vertices[]; The problem is that I don't know how many vertices there are in the model before I read the file using the InputStream given to me. Would I be able to fill it in as needed, like this: vertices[95] = 5.004f; // vertices was declared like the example above Or do I have to initialize it beforehand? If the latter is the case, then what would be a good way to find out the number of vertices in the file? Once I read it using InputStreamReader.read() it goes to the next line until it reads the whole file. The only thing I can think of would be to read the whole file, count the number of vertices, then read it AGAIN to fill in the newly initialized array. Is there a way to dynamically allocate the data as needed?

  • Automated browser testing: How to test JavaScript in web pages?

    - by Dave
    I am trying to write an application that will test a series of web-pages programmatically. The web pages being tested have JavaScript embedded within them which alter the structure of the HTML when they complete execution. It is then the goal to take the final HTML (post-execution of the embedded JavaScript) and compare it against a known output. Essentially, the Input --- Output for the test application is: URL ---[retrieve HTML]--- HTML ---[execute JS, then compare]--- PASS/FAIL Here is the challenge: I have been unable to find a solution that is able to take the HTML I retrieve from the URL and process the JavaScript, as a browser would, and generate the final HTML a user might see from "View Source" on the same page within the browser. It would be very surprising if this sort of approach has not been made before, so I'm hoping someone out there knows of a fitting solution for this application/problem? If at all possible, I'm hoping for a solution that integrates with .NET (I've tried using the WebBrowser, with no luck). However, if there is an existing 3rd party application that can do exactly this, that would be quite acceptable. Thanks in advance for the suggestions! Dave

  • NHibernate / ORM - Child Update over Web Service

    - by tyndall
    What is the correct way to UPDATE a child object with NHibernate but not have to "awake" the parent object? Let's say you would like to avoid this because the parent object is large or expensive to instantiate. Let's assume the classes are called Author (parent) and Book (child) (still trying to avoid instantiating Author). Book comes back over a web service as XML. It gets deserialized back into a CLR object. Book has an AuthorId property which allows this to happen. But it also has an Author property. The problem comes when you try to SaveOrUpdate() Book and the author_id in the database gets wiped out because the Author was null when the object gets deserialized. This seems like it would be a common problem. What is the workaround? Also, if you instantiate the Author, it has a Books property. The book you are trying to update is already one of these books (List<Book>). We have also run into the "a different object with the same identifier value was already associated with the session" problem. What is the standard process to update a child over a web service?

  • Update MySQl table onDrop?

    - by dougvt
    Hi all. I am writing a PHP/MySQL application (using CodeIgniter) that uses some jQuery functionality for dragging table rows. I have a table in which the user can drag rows to the desired order (kind of a queue for which I need to preserve the rank of each row). I've been trying to figure out how to (and whether I should) update the database each time the user drops a row, in order to simplify the UI and avoid a "Save" button. I have the jQuery working and can send a serialized list back to the server onDrop, but is it good design practice to run an update query this often? The table will usually have 30-40 rows max, but if the user drags row 1 far down the list, then potentially all the rows would need to be updated to update the rank field. I've been wondering whether to send a giant query to the server, to loop through the rows in PHP and update each row with its own Update query, to send a small serialized list to a stored procedure to let the server do all the work, or perhaps a better method I haven't considered. I've read that stored procedures in MySQL are not very efficient and use a separate process for each call. Any advice as to the right solution here? Thanks very much for your help!
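
    One common pattern is to post the serialized row order on each drop and rewrite the rank column inside a single transaction; a sketch under the assumption that the posted data is an array of row ids in their new order, and that $pdo is an open PDO connection (the table and column names here are made up, not from the question):

    ```php
    <?php
    // Hypothetical handler for the drop event: $_POST['ids'] is assumed to be the
    // serialized list of row ids in their new display order.
    $pdo->beginTransaction();
    $stmt = $pdo->prepare('UPDATE queue_items SET rank = ? WHERE id = ?');

    foreach ($_POST['ids'] as $rank => $id) {
        $stmt->execute(array($rank, (int) $id));   // rank = new position in the list
    }

    $pdo->commit();
    ```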
