Search Results

Search found 11318 results on 453 pages for 'josh close'.


  • How to reuse results with a schema for end of day stock-data

    - by Vishalrix
    I am creating a database schema to be used for technical analysis, such as top-volume gainers and top-price gainers. I have checked answers to questions here, like the design question. Having taken the hint from boe100's answer there, I have a schema modeled pretty much on it:

        Symbol - char(6)          // primary
        Date   - date             // primary
        Open   - decimal(18, 4)
        High   - decimal(18, 4)
        Low    - decimal(18, 4)
        Close  - decimal(18, 4)
        Volume - int

    Right now this table, containing end-of-day (EOD) data, will be about 3 million rows for 3 years. Later, when I get/need more data, it could be 20 million rows. The front end will make requests like "give me the top price gainers on date X over Y days". That request is one of the simpler ones and, I assume, not too costly time-wise. But a request like "give me the top volume gainers for the last 10 days, with the previous 100 days acting as baseline" could prove 10-100 times costlier. The result of such a request would be a float which signifies how many times the volume has grown, etc.

    One option I have is adding a column for each such result. But if the user asks for volume gain over 10 days against a 20-day baseline, that would require yet another column, and the total number of such columns could easily cross 100, especially if I start storing other results as well, like MACD-10 and MACD-100, each of which will require its own column. Is this a feasible solution?

    Another option is to keep the results in cached HTML files and present them to the user. I don't have much experience in web development, so to me it looks messy, but I could be wrong (of course!). Is that an option too?

    Let me add that I am/will be using mod_perl to present the response to the user, with much of the work on the MySQL database done in Perl. I would like a response time of 1-2 seconds.
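
    One way to picture the cost of that second kind of request: the gain figure can be computed per symbol from the raw EOD rows, so it doesn't strictly need its own column. A minimal Python sketch of the computation, purely illustrative (the post uses Perl/MySQL; the names here are assumptions, not from the post):

        # Hypothetical sketch: volume gain for one symbol from raw EOD rows.
        # `rows` is a list of (date, volume) tuples sorted ascending by date.
        def volume_gain(rows, recent_days=10, baseline_days=100):
            """Average volume over the last `recent_days`, divided by the
            average over the `baseline_days` immediately before them."""
            if len(rows) < recent_days + baseline_days:
                return None  # not enough history for this symbol
            recent = [v for _, v in rows[-recent_days:]]
            baseline = [v for _, v in rows[-(recent_days + baseline_days):-recent_days]]
            if sum(baseline) == 0:
                return None  # avoid dividing by a zero baseline
            return (sum(recent) / float(recent_days)) / (sum(baseline) / float(baseline_days))

    Caching a handful of such ratios on write (or in a small summary table), rather than keeping one column per parameter combination, is the usual compromise, though the sketch above is only one reading of the requirement.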


  • User control always crashes Visual Studio

    - by NickAldwin
    I'm trying to open a user control in one of our projects. It was created, I believe, in VS 2003, and the project has been converted to VS2008. I can view the code fine, but when I try to load the designer view, VS stops responding and I have to close it with the task manager. I have tried leaving it running for several minutes, but it does not do anything. I ran "devenv /log" but didn't see anything unusual in the log, and I can't find a specific error message anywhere. Any idea what the problem might be? Is there a lightweight editing mode I might be able to use, or something similar? The reason I need to look at the visual representation of this control is to decide where to insert some new components. I've tried googling it and searching SO, but either I don't know what to search for or there is nothing out there about this. Any help is appreciated. (The strangest thing is that the user control seems to load fine in another project which references it, but VS crashes as soon as I so much as click on it in that project.)


  • Speeding up inner-joins and subqueries while restricting row size and table membership

    - by hiffy
    I'm developing an RSS feed reader that uses a Bayesian filter to filter out boring blog posts. The Stream table is meant to act as a FIFO buffer from which the webapp will consume 'entries'. I use it to store the temporary relationship between entries, users and Bayesian filter classifications. After a user marks an entry as read, it will be added to the metadata table (so that a user isn't presented with material they have already read) and deleted from the Stream table. Every three minutes, a background process will repopulate the Stream table with new entries (i.e. whenever the daemon adds new entries after it checks the RSS feeds for updates).

    Problem: the query I came up with is hella slow. More importantly, the Stream table only needs to hold one hundred unread entries at a time; that would reduce duplication, make processing faster and give me some flexibility in how I display the entries. The query (takes about 9 seconds on 3600 items with no indexes):

        insert into stream(entry_id, user_id)
        select entries.id, subscriptions_users.user_id
        from entries
        inner join subscriptions_users
            on subscriptions_users.subscription_id = entries.subscription_id
        where subscriptions_users.user_id = 1
          and entries.id not in (select entry_id from metadata where metadata.user_id = 1)
          and entries.id not in (select entry_id from stream where user_id = 1);

    The query explained: insert into stream all of the entries from a user's subscription list (subscriptions_users) that the user has not read (i.e. that do not exist in metadata) and that do not already exist in the stream.

    Attempted solution: adding "limit 100" to the end speeds up the query considerably, but repeated executions keep adding a different set of 100 entries that do not already exist in the table, with each successive query taking longer and longer. This is close, but not quite what I wanted to do. Does anyone have any advice (NoSQL?) or know a more efficient way of composing the query?


  • login to website with post method

    - by druffmuff
    I want to log in to a website with C#. Here's the HTML code of the form:

        <form action="http://www.site.com/login.php" method="post" name="login" id="login">
        <table border="0" cellpadding="2" cellspacing="0">
        <tbody>
        <tr><td><b>User:</b></td><td colspan=\"2\"><b>Passwort:</b></td></tr>
        <tr>
        <td><input class="inputbg" name="user" type="text"></td>
        <td><input class="inputbg" name="password" type="password"></td>
        <td><input type="submit" name="user_control" value="Eingabe" class="buttonbg" ></td>
        </tr>
        </tbody></table></form>

    I actually tried it like this:

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://www.site.com/login.php");
        request.Method = "POST";
        using (StreamWriter writer = new StreamWriter(request.GetRequestStream(), Encoding.ASCII))
        {
            writer.Write("user=user&password=pass&user_control=Eingabe");
        }
        HttpWebResponse response = (HttpWebResponse)request.GetResponse();
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            stream = new StreamWriter("login.html");
            stream.Write(reader.ReadToEnd());
            stream.Close();
        }

    But this is not working. Any ideas why this is failing?


  • JSON Twitter List in C#.net

    - by James
    Hi, my code is below. I am not able to extract the 'name' and 'query' lists from the JSON via a DataContracted class (below). I have spent a long time trying to work this one out, and could really do with some help...

    My JSON string:

        {"as_of":1266853488,"trends":{"2010-02-22 15:44:48":[{"name":"#nowplaying","query":"#nowplaying"},{"name":"#musicmonday","query":"#musicmonday"},{"name":"#WeGoTogetherLike","query":"#WeGoTogetherLike"},{"name":"#imcurious","query":"#imcurious"},{"name":"#mm","query":"#mm"},{"name":"#HumanoidCityTour","query":"#HumanoidCityTour"},{"name":"#awesomeindianthings","query":"#awesomeindianthings"},{"name":"#officeformac","query":"#officeformac"},{"name":"Justin Bieber","query":"\"Justin Bieber\""},{"name":"National Margarita","query":"\"National Margarita\""}]}}

    My code:

        WebClient wc = new WebClient();
        wc.Credentials = new NetworkCredential(this.Auth.UserName, this.Auth.Password);
        string res = wc.DownloadString(new Uri(link)); // the download string gives me the above JSON string - no problems

        Trends trends = new Trends();
        Trends obj = Deserialise<Trends>(res);

        private T Deserialise<T>(string json)
        {
            T obj = Activator.CreateInstance<T>();
            using (MemoryStream ms = new MemoryStream(Encoding.Unicode.GetBytes(json)))
            {
                DataContractJsonSerializer serialiser = new DataContractJsonSerializer(obj.GetType());
                obj = (T)serialiser.ReadObject(ms);
                ms.Close();
                return obj;
            }
        }

        [DataContract]
        public class Trends
        {
            [DataMember(Name = "as_of")]
            public string AsOf { get; set; }
            // The as_of value is returned - but how do I get the
            // multidimensional array of names and queries from the JSON here?
        }


  • How can we serialize a class that is not a custom class of our own?

    - by Doug
    I need to look at the properties of an object and I cannot instantiate this object in the proper state on my dev machine. I need my client to run some code on her machine, serialize the object in question to disk and then I can analyze the file. Here is the class I want to serialize:

        System.Security.AccessControl.RegistrySecurity

    Here is my code:

        Private Sub SerializeRegSecurity(ByVal regKey As RegistryKey)
            Try
                Dim regSecurity As System.Security.AccessControl.RegistrySecurity = regKey.GetAccessControl()
                Dim oXS As XmlSerializer = New XmlSerializer(GetType(System.Security.AccessControl.RegistrySecurity))
                Dim oStmW As StreamWriter
                Dim regDebugFilePath As String = Path.Combine(My.Computer.FileSystem.SpecialDirectories.Desktop, "RegDebugFile.xml")

                'Serialize object to XML and write it to XML file
                oStmW = New StreamWriter(regDebugFilePath)
                oXS.Serialize(oStmW, regSecurity)
                oStmW.Close()
            Catch ex As Exception
                Console.WriteLine(ex.ToString)
            End Try
        End Sub

    And here's what I end up with in my XML file:

        <?xml version="1.0" encoding="utf-8"?>

    Any ideas on how to accomplish what I am trying to do? How can we serialize a class that is not a custom class of our own? Thanks for ANY help. Even an alternate method.


  • Parallel version of loop not faster than serial version

    - by Il-Bhima
    I'm writing a program in C++ to perform a simulation of a particular system. For each timestep, the biggest part of the execution is taken up by a single loop. Fortunately this is embarrassingly parallel, so I decided to use Boost Threads to parallelize it (I'm running on a 2-core machine). I would expect a speedup close to 2 times the serial version, since there is no locking. However, I am finding that there is no speedup at all.

    I implemented the parallel version of the loop as follows: wake up the two threads (they are blocked on a barrier); each thread then performs the following:

        1. Atomically fetch and increment a global counter.
        2. Retrieve the particle with that index.
        3. Perform the computation on that particle, storing the result in a separate array.
        4. Wait on a "job finished" barrier.

    The main thread waits on the "job finished" barrier. I used this approach since it should provide good load balancing (each computation may take a differing amount of time).

    I am really curious as to what could possibly cause this slowdown. I always read that atomic variables are fast, but now I'm starting to wonder whether they have their own performance costs. If anybody has ideas on what to look for, or any hints, I would really appreciate it. I've been bashing my head against it for a week, and profiling has not revealed much.


  • TortoiseSVN lists files as modified, but they are identical

    - by BJ Safdie
    I am merging a hot fix from our QA branch back into our Dev branch. Five files have changed. I do a fresh checkout of the Dev branch, then do a merge (range of revisions) from QA into the Dev working copy. It brings in five files, and there is a conflict on an external and an ignore property, which I resolve by "using local" (Dev). When I check modifications or commit, I expect to see the five files I merged as the only changes. However, close to 700 "modified" files show up in the commit dialog. If I select one of these files and "Compare with base", WinMerge comes up and says the "files are identical". I have tried this with the file dates set to "last committed" and not. Why are all of these files showing up as modified when they are identical? What in the merge is causing this? How do I prevent SVN/TortoiseSVN from getting confused this way in the future?


  • Lazarus Pascal - DB Connection - clarification

    - by itsols
    The following code is from the docs here:

        Program ConnectDB;

        var
          AConnection : TSQLConnection;

        Procedure CreateConnection;
        begin
          AConnection := TIBConnection.Create(nil);
          AConnection.Hostname := 'localhost';
          AConnection.DatabaseName := '/opt/firebird/examples/employee.fdb';
          AConnection.UserName := 'sysdba';
          AConnection.Password := 'masterkey';
        end;

        begin
          CreateConnection;
          AConnection.Open;
          if AConnection.Connected then
            writeln('Succesful connect!')
          else
            writeln('This is not possible, because if the connection failed, ' +
                    'an exception should be raised, so this code would not ' +
                    'be executed');
          AConnection.Close;
          AConnection.Free;
        end.

    The main body of the code makes sense to me, BUT I don't get where TSQLConnection came from. I cannot use Ctrl+Space to autocomplete it either, which means my program has no reference to it. I'm trying to connect to Postgres, by the way. Can someone please state what TSQLConnection is? Thanks!


  • Really frustrated: Help writing a sample twitter app

    - by Jack
    I have installed WAMP and have enabled cURL in php.ini. I want to implement a Twitter app that posts a new status message for a user. Here's my code:

        <?php
        function updateTwitter($status) {
            $username = 'xxxxxx';
            $password = 'xxxx';
            $url = 'http://twitter.com/statuses/update.xml';
            $postargs = 'status='.urlencode($status);
            $responseInfo = array();

            $ch = curl_init($url);
            curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 2);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
            curl_setopt($ch, CURLOPT_PROXY, "localhost:80");
            curl_setopt($ch, CURLOPT_POST, true);
            // Give CURL the arguments in the POST
            curl_setopt($ch, CURLOPT_POSTFIELDS, $postargs);
            // Set the username and password in the CURL call
            curl_setopt($ch, CURLOPT_USERPWD, $username.':'.$password);

            // Set some cur flags (not too important)
            $response = curl_exec($ch);
            if ($response === false) {
                echo 'Curl error: ' . curl_error($ch);
            } else {
                echo 'Operation completed without any errors<br/>';
            }

            // Get information about the response
            $responseInfo = curl_getinfo($ch);
            // Close the CURL connection
            curl_close($ch);

            // Make sure we received a response from Twitter
            if (intval($responseInfo['http_code']) == 200) {
                // Display the response from Twitter
                echo $response;
            } else {
                // Something went wrong
                echo "Error: " . $responseInfo['http_code'];
            }
            curl_close($ch);
        }

        updateTwitter("Just finished a sweet tutorial on http://brandontreb.com");
        ?>

    I am getting the following output:

        Operation completed without any errors
        Error: 404

    Please help.


  • "hour" int taken from NSDate not behaving as expected at midnight??

    - by Eric
    I feel like I've lost my mind. Can someone tell me what's going on here? Also, I'm sure there is a better way to do what I'm trying to do, but I'm not interested in that now. I'd just like to solve the mystery of why my ints are not responding to logic as expected.

        // Set "At: " field close to current time
        NSDateFormatter *dateFormatter = [[NSDateFormatter alloc] init];
        [dateFormatter setDateFormat:@"HH"];
        int hour = [[dateFormatter stringFromDate:[NSDate date]] intValue];
        [dateFormatter setDateFormat:@"mm"];
        int minute = [[dateFormatter stringFromDate:[NSDate date]] intValue];
        NSLog(@"currently %i:%i", hour, minute);

        if (hour >= 12) { // convert to AM/PM
            selectedMeridiem = 1;
            if (hour != 12) {
                hour = hour - 12;
            }
        } else {
            selectedMeridiem = 0;
        }

        selectedHour = hour - 1;
        if (selectedHour <= 0) {
            selectedHour = 11;
        }

    When I debug the above code with my clock set to 12:XX AM, the integer "hour" returned is 0. But then any if statements with the condition if(hour == 0) are not evaluated. Likewise, this would not be evaluated either: if(hour < 1). The code above puts the hour int into another int, selectedHour (don't worry about why I'm doing this for now), but selectedHour suffers from the same weird behavior; the if(selectedHour <= 0) line is never evaluated. Am I going crazy, or am I just an idiot? Maybe there's some behavior of 0 integers that I'm not aware of. All of my code runs fine as long as it's not 12:XX AM.


  • jQuery crashing Internet Explorer

    - by Bradley Bell
    Hello. Okay, basically I'm designing and developing a fairly complicated website which revolves around the use of jQuery. My knowledge of jQuery is really poor, and this is the first time I've properly used it. I posted a question here before about the script, and apparently it's awful, but I didn't show you exactly what I was actually writing it for, which I can now, because I've uploaded it onto a test directory. It now works fine in every browser other than IE. The CSS styling is getting there and it should be close to finished soon! However, Internet Explorer is showing bad problems. In IE 7 and 8 it looks fine, but when you go to hover over a link, it immediately crashes. In IE 6, the display doesn't seem to be working properly at all, but IE 6 is a lesser problem. If you could take just 5 or 10 minutes to rewrite a simple script which would potentially take me 10 hours, I would be so grateful! Here's the site: http://openyourheart.org.uk/test/index.html - I can send all the files zipped if required. Thank you in advance. Bradley


  • Upload using python script takes very long on one laptop as compared to another

    - by Engr Am
    I have Python 2.7 code which uses the storbinary function for uploading files to an FTP server and retrbinary for downloading from this server. However, the issue is that the upload is taking a very long time on three laptops from different brands, as compared to a Dell laptop. The strange part is that when I manually upload any file, it takes the same time on all the systems.

    The manual upload rate and the upload rate with the Python script are the same on the Dell laptop. However, on every other brand of laptop (I have tried IBM, Toshiba and Fujitsu-Siemens) the Python script has a much lower upload rate than the manual attempt. Also, on all these other laptops, the upload rate using the Python script is the same (1 Mbit/s) while the manual upload rate is approx. 8 Mbit/s. I have tried varying the file size for the upload, to no avail. TCP Optimizer improved the download rate on all the systems but had no effect on the upload rate. The download rate using this script is fine on all the systems, and the same as the manual download rate.

    I have checked the server, and it has more than 90% free space. The network connection is the same for all the laptops, and I try uploading with only one laptop at a time. All the laptops have almost the same system configuration, the same operating system and approximately the same free drive space. If anything, the Dell laptop is a little lower in terms of processing power and RAM than two of the others, but I suppose this has no effect, as I have checked many times how much CPU and network were being used during these uploads and downloads, and I am sure that no other virus or program was eating up my bandwidth.

    Here is the code ('ftp' and 'file_path' are inputs to the function):

        path, filename = os.path.split(file_path)
        filesize = os.path.getsize(file_path)
        deffilesize = (filesize / 1024) / 1024
        f = open(file_path, "rb")
        upstart = time.clock()
        print ftp.storbinary("STOR " + filename, f)
        upende = time.clock() - upstart
        outname = "Upload "
        f.close()
        return upende, deffilesize, outname
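
    For what it's worth, one knob that differs between manual clients and ftplib here: storbinary in Python 2.7 takes a blocksize argument (default 8192 bytes), and a small block size can throttle throughput on some machines. A hypothetical variant of the upload with a larger block and wall-clock timing:

        from ftplib import FTP
        import os, time

        def timed_upload(ftp, file_path, blocksize=65536):
            # Upload with a larger block than ftplib's 8192-byte default.
            # `ftp` is an already-connected ftplib.FTP object.
            filename = os.path.basename(file_path)
            size_mb = os.path.getsize(file_path) / 1024.0 / 1024.0
            start = time.time()  # wall clock; time.clock() is CPU time on Unix
            f = open(file_path, "rb")
            try:
                ftp.storbinary("STOR " + filename, f, blocksize)
            finally:
                f.close()
            return time.time() - start, size_mb

    Whether this closes the gap between the Dell and the other laptops is untested here; it is only a first thing to rule out.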


  • Simulated Annealing and Yahtzee!

    - by Jasie
    I've picked up Programming Challenges and found a Yahtzee! problem, which I will simplify:

        - There are 13 scoring categories.
        - There are 13 rolls by a player (comprising a play).
        - Each roll must fit in a distinct category.
        - The goal is to find the maximum score for a play (the optimal placement of rolls in categories); score(play) returns the score for a play.

    Brute-forcing to find the maximum play score requires 13! (= 6,227,020,800) score() calls. I chose simulated annealing to find something close to the highest score, faster. Though not deterministic, it's good enough. I have a list of 13 rolls of 5 dice, like:

        ((1,2,3,4,5)  #1
         (1,2,6,3,4)  #2
         ...
         (1,4,3,2,2)) #13

    and a play (1,5,6,7,2,3,4,8,9,10,13,12,11) passed into score() returns a score for that play's permutation.

    How do I choose a good "neighboring state"? For random restart, I can simply choose a random permutation of the numbers 1-13, put them in a vector, and score them. In the traveling salesman problem, here's an example of a good neighboring state: "The neighbours of some particular permutation are the permutations that are produced for example by interchanging a pair of adjacent cities." I have a bad feeling about simply swapping two random vector positions, like so:

        (1,5,6,7, 2,  3,4,8,9,10, 13, 12,11)  # switch 2 and 13
        (1,5,6,7, 13, 3,4,8,9,10, 2,  12,11)  # now score this one

    But I have no evidence and don't know how to select a good neighboring state. Does anyone have any ideas on how to pick good neighboring states?
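
    For reference, the random two-position swap the post is wary of is in fact a standard neighborhood for permutation problems; whether it anneals well for Yahtzee! is an open question. A minimal, hypothetical Python sketch (score() is assumed to exist and be maximized):

        import math
        import random

        def neighbor(play):
            # Swap two distinct positions - the usual move for permutations.
            a, b = random.sample(range(len(play)), 2)
            nxt = list(play)
            nxt[a], nxt[b] = nxt[b], nxt[a]
            return nxt

        def anneal(play, score, temp=10.0, cooling=0.999, steps=100000):
            cur, cur_s = list(play), score(play)
            best, best_s = cur, cur_s
            for _ in range(steps):
                cand = neighbor(cur)
                cand_s = score(cand)
                # Always accept improvements; accept worse plays with a
                # probability that shrinks as the temperature cools.
                if cand_s >= cur_s or random.random() < math.exp((cand_s - cur_s) / temp):
                    cur, cur_s = cand, cand_s
                    if cur_s > best_s:
                        best, best_s = list(cur), cur_s
                temp *= cooling
            return best, best_s

    A cheap refinement in the same spirit as the TSP quote is to bias the swap towards positions whose rolls score poorly in their current categories, rather than choosing both positions uniformly.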


  • JUnit: checking if a void method gets called

    - by nkr1pt
    I have a very simple file watcher class which checks every 2 seconds whether a file has changed and, if so, calls the onChange method (void). Is there an easy way to check in a unit test whether the onChange method is getting called?

    Code:

        public class PropertyFileWatcher extends TimerTask {
            private long timeStamp;
            private File file;

            public PropertyFileWatcher(File file) {
                this.file = file;
                this.timeStamp = file.lastModified();
            }

            public final void run() {
                long timeStamp = file.lastModified();
                if (this.timeStamp != timeStamp) {
                    this.timeStamp = timeStamp;
                    onChange(file);
                }
            }

            protected void onChange(File file) {
                System.out.println("Property file has changed");
            }
        }

        @Test
        public void testPropertyFileWatcher() throws Exception {
            File file = new File("testfile");
            file.createNewFile();

            PropertyFileWatcher propertyFileWatcher = new PropertyFileWatcher(file);
            Timer timer = new Timer();
            timer.schedule(propertyFileWatcher, 2000);

            FileWriter fw = new FileWriter(file);
            fw.write("blah");
            fw.close();

            Thread.sleep(8000);

            // check if propertyFileWatcher.onChange was called
            file.delete();
        }


  • WinForms programming - Modal and Non-Modal forms problem

    - by Povilas
    I have a problem with the modality of forms under C#.NET. Let's say I have main form #0 (see the image below). This form represents the main application form, where the user can perform various operations. However, from time to time there is a need to open an additional non-modal form to perform additional tasks supporting the main application functionality. Let's say this is form #1 in the image. On this form #1 a few additional modal forms might be opened on top of each other (form #2 in the image), and at the end there is a progress dialog showing a long operation's progress and status, which might take from a few minutes up to a few hours. The problem is that the main form #0 is not responsive until you close all the modal forms (#2 in the image). I need the main form #0 to be operational in this situation. However, if you open a non-modal form from form #2, you can operate both the modal form #2 and the newly created non-modal form. I need the same behavior between the main form #0 and form #1 with all its child forms. Is it possible? Or am I doing something wrong? Maybe there is some kind of workaround; I really would not like to change all ShowDialog calls to ShowDialog...


  • Design PDF template and populate data at runtime using Java, XML etc.

    - by Samant
    Well, I have been looking for a Java-based PDF solution... we don't have a clean way yet, I guess. All solutions are primitive and kinds of workarounds. There is no easy solution for this requirement:

        1. Design a PDF template using an IDE (e.g. LiveCycle Designer, which is not free).
        2. At runtime, use Java to populate data into this PDF template, either from XML or other data sources.

    Such a simple requirement, and no one has a good "open-source and free" solution yet! Is anyone aware of any? I have been searching for 3-4 years now for a clean way out... Eclipse BIRT comes close, but does not handle barcode elements out of the box. Jasper/iReport is also good, but that tool does not have a table concept and is kind of annoying! Also, its barcode support is not good. XSL-FO has no free IDE for design. Looking for a better answer... got one?


  • Reloading the model of a TTTableViewController

    - by user341338
    My problem is that I have a Register controller and a Login controller. The Login screen displays a login screen or a logout screen, depending on whether a user is logged in. Now, when a user registers, does not close the app, and then goes to the Login screen, it will still display a login screen, although there is already a logged-in user. This is because the screen is created when the application loads and does not change afterwards. I tried doing this:

        - (id)init {
            if (self = [super init]) {
                [self invalidateModel];
                [self reload];

    but that did not work, since it is only called on the first init. Then I tried:

        - (void)viewDidLoad {
            [self invalidateModel];
            [self reload];
        }

    But that method had the same problem. Then I found this method:

        - (TTNavigationMode)navigationModeForURL:(NSString*)URL;

    with the following options:

        typedef enum {
          TTNavigationModeNone,
          TTNavigationModeCreate,    // a new view controller is created each time
          TTNavigationModeShare,     // a new view controller is created, cached and re-used
          TTNavigationModeModal,     // a new view controller is created and presented modally
          TTNavigationModeExternal,  // an external app will be opened
        } TTNavigationMode;

    It seems like TTNavigationModeCreate would be the right thing to use, but I have no clue how to use it. Any help? Thnx.


  • How to publish internal data to the internet - as simple as possible

    - by mlarsen
    We have a .NET two-tier application where a desktop program talks to a database. We support MS SQL Server 2000, 2005 and 2008, and Oracle 9, 10 and 11. The application is sold, not as shrink-wrap, but pretty close. It is quite important for us that installation and configuration be as easy as possible, as installation instructions are usually supplied in written form to the customer's internal IT department. Our application is usually not seen as mission-critical by the IT department, so we need to keep their work down to a minimum.

    Now we are starting to get requests for a web application built on top of the same data. The web application will be hosted by us and delivered as a SaaS application. The challenge is how to move data back and forth between the web application and the customer's internal database. As I see it, we have some requirements:

        - We must be ready to handle the situation where the customer's database is not accessible from the DMZ. I guess the easiest solution is that all communication is initiated from inside the customer's LAN.
        - As little firewall configuration as possible. The best is if we can run without any special configuration as long as outgoing traffic from the customer's LAN is not blocked. If we need something changed in the firewall, we must be able to document that the change is secure.
        - It doesn't have to be real time. Moving data in batches every ten minutes or so is OK.
        - Data moves both ways, but not in the same tables, so we don't have to support merges.
        - It would be nice if we don't have to roll our own framework completely.

    Looking forward to hearing your suggestions.


  • What's the best way to retrieve two pieces of data from an XML file?

    - by Morinar
    I've got an XML document that is in either a pre- or post-FO-transformed state, from which I need to extract some information. In the pre case, I need to pull out two tags that represent the pageWidth and pageHeight, and in the post case I need to extract the page-height and page-width parameters from a specific tag (I forget which one it is off the top of my head). What I'm looking for is an efficient, easily maintainable way to grab these two elements. I'd like to read the document only a single time, fetching the two things I need. I initially started writing something that would use BufferedReader + FileReader, but then I'm doing string searching, and it gets messy when the tags span multiple lines. I then looked at DOMParser, which seems like it would be ideal, but I don't want to read the entire file into memory if I can help it, as the files could potentially be large and the tags I'm looking for will nearly always be close to the top of the file. I then looked into SAXParser, but that seems like a big pile of complicated overkill for what I'm trying to accomplish. Anybody have any advice? Or simple implementations that would accomplish my goal? Thanks.
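
    The pattern being circled here - stream the document and abort as soon as both values are seen - is easiest to show in a short sketch. This one is Python rather than Java, purely to illustrate the shape of the early exit; the tag and attribute names are made-up stand-ins for the real pre/post-FO ones:

        import xml.etree.cElementTree as ET

        def page_dimensions(path):
            found = {}
            for event, elem in ET.iterparse(path):  # 'end' events: text is complete
                if elem.tag in ("pageWidth", "pageHeight"):    # pre-FO case
                    found[elem.tag] = elem.text
                elif "page-width" in elem.attrib:              # post-FO case
                    found["pageWidth"] = elem.get("page-width")
                    found["pageHeight"] = elem.get("page-height")
                if len(found) == 2:
                    break  # tags sit near the top, so little of the file is read
            return found

    A SAX handler that throws a sentinel exception once both values are captured is the direct Java equivalent, and is less machinery than it first appears.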


  • problem writing xml to file with .net mvc - timeout?

    - by Mark
    Hey, so I'm having an issue with writing out to an XML file. It works fine for single requests via the browser, but when I use something like Charles to perform 5-10 repeated requests concurrently, several of them will fail. The trace simply shows a 500 error with no content inside; basically, I think they start timing out waiting for write access or something... This method is inside my repository class; I have also tried making the repository instance a singleton, but it doesn't appear to make any difference. Any help would be much appreciated. Cheers.

        public void Add(Request request)
        {
            try
            {
                XDocument requests;
                XmlReader xmlReader;
                using (xmlReader = XmlReader.Create(_requestsFilePath))
                {
                    requests = XDocument.Load(xmlReader);
                    XElement xmlRequest = new XElement("request",
                        new XElement("code", request.code),
                        new XElement("date", request.date),
                        new XElement("email", new XCData(request.email)),
                        new XElement("name", new XCData(request.name)),
                        new XElement("recieveOffers", request.recieveOffers)
                    );
                    requests.Root.Element("requests").Add(xmlRequest);
                    xmlReader.Close();
                }
                requests.Save(_requestsFilePath);
            }
            catch (Exception ex)
            {
                HttpContext.Current.Trace.Warn("Error writing to file: " + ex);
            }
        }


  • ASP.NET SqlDataReader throwing error: Invalid attempt to call Read when reader is closed.

    - by Bugget
    This one has me stumped. Here are the relevant bits of code:

        public AgencyDetails(Guid AgencyId)
        {
            try
            {
                evgStoredProcedure Procedure = new evgStoredProcedure();
                Hashtable commandParameters = new Hashtable();
                commandParameters.Add("@AgencyId", AgencyId);
                SqlDataReader AppReader = Procedure.ExecuteReaderProcedure("evg_getAgencyDetails", commandParameters);
                commandParameters.Clear();

                // The following line is where the error is thrown.
                // Error message: Invalid attempt to call Read when reader is closed.
                while (AppReader.Read())
                {
                    AgencyName = AppReader.GetOrdinal("AgencyName").ToString();
                    AgencyAddress = AppReader.GetOrdinal("AgencyAddress").ToString();
                    AgencyCity = AppReader.GetOrdinal("AgencyCity").ToString();
                    AgencyState = AppReader.GetOrdinal("AgencyState").ToString();
                    AgencyZip = AppReader.GetOrdinal("AgencyZip").ToString();
                    AgencyPhone = AppReader.GetOrdinal("AgencyPhone").ToString();
                    AgencyFax = AppReader.GetOrdinal("AgencyFax").ToString();
                }
                AppReader.Close();
                AppReader.Dispose();
            }
            catch (Exception ex)
            {
                throw new Exception("AgencyDetails Constructor: " + ex.Message.ToString());
            }
        }

    And the implementation of ExecuteReaderProcedure:

        public SqlDataReader ExecuteReaderProcedure(string ProcedureName, Hashtable Parameters)
        {
            SqlDataReader returnReader;
            using (SqlConnection conn = new SqlConnection(connectionString))
            {
                try
                {
                    SqlCommand cmd = new SqlCommand(ProcedureName, conn);
                    SqlParameter param = new SqlParameter();
                    cmd.CommandType = System.Data.CommandType.StoredProcedure;
                    foreach (DictionaryEntry keyValue in Parameters)
                    {
                        cmd.Parameters.AddWithValue(keyValue.Key.ToString(), keyValue.Value);
                    }
                    conn.Open();
                    returnReader = cmd.ExecuteReader(CommandBehavior.CloseConnection);
                }
                catch (SqlException e)
                {
                    throw new Exception(e.Message.ToString());
                }
            }
            return returnReader;
        }

    The connection string is working, as other stored procedures in the same class run fine. The only problem seems to be when returning SqlDataReaders from this method! They throw the error message in the title. Any ideas are greatly appreciated! Thanks in advance!


  • closing an unordered list

    - by snorpey
    On a website, I want to display the main navigation as an unordered list. After 3 items, I want to close that list and open a new one, so that it eventually looks something like this:

        <div id="navigation">
            <ul>
                <li>1</li>
                <li>2</li>
                <li>3</li>
            </ul>
            <ul>
                <li>4</li>
                <li>5</li>
                <li>6</li>
            </ul>
        </div>

    The navigation is dynamically generated using jQuery + Ajax. This is what the code I'm using looks like:

        $.getJSON("load.php", { some: value }, function(data) {
            $.each(data.items, function(i, item) {
                $('#navigation').find('ul').append('<li>' + i + '</li>');
                if (i % 3 == 0) {
                    $('#navigation').find('ul').append('</ul><ul>');
                }
            });
        });

    Unfortunately, the browser doesn't interpret this as intended and treats the closing ul tag as a nested object of the first ul. How do I fix this?


  • Caching issue with javascript and asp.net

    - by Ed Woodcock
    Hi guys: I asked a question a while back on here regarding caching data for a calendar/scheduling web app and got some good responses. However, I have now decided to change my approach and start caching the data in JavaScript. I am directly caching the HTML for each day's column in the calendar grid inside the $('body').data() object, which gives very fast page-load times (almost unnoticeable). However, problems start to arise when the user requests data that is not yet in the cache. This data is created by the server using an Ajax call, so it's asynchronous, and it takes about 0.2s per week of data. My current approach is simply to block for 0.5s when the user requests information from the server, and to cache 4 weeks on either side in the initial page load (plus 1 extra week per page-change request); however, I doubt this is the optimal method. Does anyone have a suggestion as to how to improve the situation?

    To summarise:

        - Each week takes 0.2s to retrieve from the server, asynchronously.
        - Performance must be as close to real time as possible. (However, the data does not need to be fully real time: most appointments are added by the user, so we can re-cache after that.)
        - Currently 4 weeks are cached on either side of the initial week loaded: this is not enough.
        - Caching 1 year takes ~21s; this is too slow for an initial load.


  • mysql/algorithm: Weighting an average to accentuate differences from the mean

    - by Sai Emrys
    This is for a new feature on http://cssfingerprint.com (see /about for general info). The feature looks up the sites you've visited in a database of site demographics, and tries to guess what your demographic stats are based on that. All my demographics are in 0..1 probability format, not ratios or absolute numbers or the like.

    Essentially, you have a large number of data points that each tend you towards their own demographics. However, just taking the average is poor, because it means that adding in a lot of generic data drags the number down. For example, suppose you've visited sites S0..S50. All except S0 are 48% female; S0 is 100% male. If I'm guessing your gender, I want to have a value close to 100% male, not just the 49% that a straight average would give.

    Also, consider that for most demographics (i.e. everything other than gender) the average is not at 50%. For example, the average probability of having kids aged 0-17 is ~37%. The more a given site's demographics differ from this average (e.g. maybe it's a site for parents, or for child-free people), the more it should count in my guess of your status.

    What's the best way to calculate this? For extra credit: what's the best way to calculate this that is also cheap and easy to do in MySQL?
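
    One standard answer (not from the post) is to combine the per-site probabilities in log-odds space relative to the population prior: sites near the prior then contribute almost nothing, while extreme sites dominate, which is exactly the accentuation asked for. A minimal Python sketch, with the prior as an input:

        import math

        def combine(probs, prior=0.5, eps=1e-6):
            # Naive-Bayes-style combination: sum each site's deviation from
            # the prior in log-odds, then map back to a probability.
            logit = lambda p: math.log(p / (1.0 - p))
            total = logit(prior)
            for p in probs:
                p = min(max(p, eps), 1.0 - eps)  # clamp 0/1 so log() stays finite
                total += logit(p) - logit(prior)
            return 1.0 / (1.0 + math.exp(-total))

        # The gender example: fifty sites at 48% female, one at ~100% male.
        print(combine([0.48] * 50 + [0.0], prior=0.5))  # ~0, i.e. strongly male

    In MySQL this can be one pass: SUM a precomputed log-odds-minus-prior column per visitor and apply the sigmoid in the application. It does assume the sites are independent evidence, which they are not quite.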

