Search Results

Search found 61110 results on 2445 pages for 'generation time'.


  • What are the most likely reasons an application would fail on only one of my servers?

    - by Rising Star
    I have several servers to test new code on. I primarily push out ASP.NET web applications. Last week, I had an issue where I installed a newly developed web application on three servers. The three servers all run in separate environments. The application worked fine on two of them, but consistently crashed on the third server with each web request. The problem was eventually traced to an in-house developed .dll file being out of date on the third server. I'm certain that this kind of thing happens all the time. However, there are numerous things that could go wrong to cause this kind of behavior, and I spent quite a bit of time tracing this problem. I would like to make a list of things to be suspicious of next time this happens. What are the most likely reasons that a web application would crash on one of my servers while identical code runs fine on another?

    Read the article

  • C++ std::map question about iterator order

    - by jbu
    Hi all, I am a C++ newbie trying to use a map so I can get constant-time lookups with the find() method. The problem is that when I use an iterator to go over the elements in the map, the elements do not appear in the same order that they were placed in the map. Without maintaining another data structure, is there a way to achieve insertion-order iteration while still retaining the constant-time lookup ability? Please let me know. Thanks, jbu. Edit: thanks for letting me know map::find() isn't constant time.

    Read the article

  • Database indexes and their Big-O notation

    - by miket2e
    I'm trying to understand the performance of database indexes in terms of Big-O notation. Without knowing much about it, I would guess that: querying on a primary key or unique index gives you an O(1) lookup time; querying on a non-unique index also gives an O(1) time, albeit maybe the '1' is slower than for the unique index(?); querying on a column without an index gives an O(N) lookup time (a full table scan). Is this generally correct? Will querying on a primary key ever give worse performance than O(1)? My specific concern is SQLite, but I'd be interested in knowing to what extent this varies between different databases too.
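
    A note on checking this in practice: most engines (including SQLite) implement indexes as B-trees, so indexed lookups are typically O(log N) rather than strictly O(1), and SQLite's EXPLAIN QUERY PLAN will show whether a query uses an index or a full table scan. A minimal sketch with the standard-library sqlite3 module (the "items" table and its columns are invented for illustration):

    ```python
    import sqlite3

    # Illustrative schema only: a table with an integer primary key and one plain column.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO items (name) VALUES (?)",
                     [("n%d" % i,) for i in range(1000)])

    # Primary-key lookup: reported as a search on the rowid B-tree (O(log N)).
    print(conn.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM items WHERE id = 500").fetchall())

    # Lookup on an unindexed column: reported as a full table scan (O(N)).
    print(conn.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM items WHERE name = 'n500'").fetchall())

    # After adding a (non-unique) index, the same lookup becomes an index search.
    conn.execute("CREATE INDEX idx_items_name ON items(name)")
    print(conn.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM items WHERE name = 'n500'").fetchall())
    ```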

    Read the article

  • Strategy for developing a multi-function ASP.NET web application

    - by user247023
    I'm about to start a new project and want some advice on how to implement it. I need a web application which contains a booking module for reserving timeslots, and a time management module which will enable employees to clock in / clock out. If I am writing an update to the time management module, I don't want to disrupt the booking engine's availability by releasing a new solution containing both modules. To make things more difficult, there is some shared functionality like common users, roles and security. Here's a suggestion I've gotten, which sounds a bit cruddy, but may be functional: write a 'container' web application which consists of basically a frame and the authentication / security features. This then has links which will load the two independently built and released web applications into the frame. I can see that if I wanted to update the time management module, I would only need to build and release it separately, and the rest of the solution would be untouched. Any better alternatives?

    Read the article

  • Best way to store weekly event in MySQL?

    - by mazin k.
    I have a table of weekly events that run on certain days of the week (e.g. MTWTh, MWF, etc.) and at a certain time (e.g. 8am-5pm). What's the best way to store the day-of-week information in MySQL to make retrieving and working with the data easiest? My CakePHP app is going to need to retrieve all events happening NOW(). For time of day, I would just use TIME. For days of the week, I had considered a 7-bit bitfield or a varchar ("MTWThFr" type deal), but both of those seem like clunky solutions (the varchar being clunkier). Any suggestions?
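
    One common direction is the bitmask variant of the 7-bit idea: store the days as a small integer and match the current weekday with a bitwise AND. A sketch (the events schema, column names and the Monday-first bit convention are assumptions, not from the question):

    ```python
    from datetime import datetime

    # Assumed convention: Monday = bit 0 ... Sunday = bit 6, matching datetime.weekday().
    DAY_BITS = {"Mon": 1, "Tue": 2, "Wed": 4, "Thu": 8, "Fri": 16, "Sat": 32, "Sun": 64}

    def days_to_mask(days):
        """Turn e.g. ["Mon", "Wed", "Fri"] into the integer stored in a TINYINT column."""
        return sum(DAY_BITS[d] for d in days)

    # Hypothetical schema: events(day_mask TINYINT, start_time TIME, end_time TIME).
    now = datetime.now()
    today_bit = 1 << now.weekday()          # weekday(): Monday == 0
    query = ("SELECT * FROM events "
             "WHERE (day_mask & %s) != 0 AND start_time <= %s AND end_time >= %s")
    params = (today_bit, now.time(), now.time())

    print(days_to_mask(["Mon", "Wed", "Fri"]))   # 21
    print(query, params)
    ```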

    Read the article

  • Design Solution For Storing-Fetching Images

    - by Chaitanya
    This is a design question I am facing. I have a collection of 1500 images which are to be displayed on an ASP.NET page; the images to be displayed differ from one page to another, and the count of these images will increase over time. (a) Is it a good idea to keep the images in the database, given that the round-trip time to fetch them from the database might be high? (b) Is it better to keep all the images in a directory, with a virtual file system over it, so that the application accesses the images from the directory? Is there any particular design strategy in a traditional database for fetching images with the least round-trip time, and does any solution other than a traditional database exist? PS: I use SQL Server to store these images.

    Read the article

  • Queue remote calls to a Python Twisted perspective broker?

    - by agartland
    The strength of Twisted (for Python) is its asynchronous framework (I think). I've written an image processing server that takes requests via Perspective Broker. It works great as long as I feed it less than a couple hundred images at a time. However, sometimes it gets spiked with hundreds of images at virtually the same time. Because it tries to process them all concurrently, the server crashes. As a solution, I'd like to queue up the remote_calls on the server so that it only processes ~100 images at a time. It seems like this might be something that Twisted already does, but I can't seem to find it. Any ideas on how to start implementing this? A point in the right direction? Thanks!
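
    Twisted's DeferredSemaphore is one built-in way to cap concurrency like this; a minimal sketch (the limit of 100 and the process_image callable are assumptions, not from the question):

    ```python
    from twisted.internet.defer import DeferredSemaphore, gatherResults

    # At most 100 image-processing calls run at once; the rest wait inside the semaphore.
    semaphore = DeferredSemaphore(100)

    def process_image(image):
        # Placeholder for the real (Deferred-returning) image processing work.
        raise NotImplementedError

    def process_batch(images):
        # semaphore.run() acquires a token, calls the function, and releases on completion.
        deferreds = [semaphore.run(process_image, img) for img in images]
        return gatherResults(deferreds)
    ```

    Calling process_batch from the remote_* method keeps the Perspective Broker interface unchanged while the semaphore throttles the actual work.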

    Read the article

  • What to do with a big image that's slowing website loading down significantly

    - by Dave
    Hi, I'm working on a website that's already been designed by someone else. The designer has used a big image (900x700, 100 KB) which contains a big logo right across the top, plus the background for two columns. This image loads every time a page is loaded, as it forms the basis for the website. What should I do with it to improve loading time? I'm considering splitting it up into two or more images, especially the logo on the top. Does splitting up images like that decrease loading time in any significant way? Thanks. Edit: also, all the images are .jpg; would changing this to .gif or .png help anything?

    Read the article

  • Memory leak with ContextMenuStrip

    - by Dave
    I'm creating a lot of custom controls and adding them to a FlowLayoutPanel. There is also a ContextMenuStrip created and populated at design time. Every time a control is added to the panel it has its ContextMenuStrip property assigned to this menu, so that all controls "share" the same menu. But I noticed when the controls are removed from the panel and disposed of, the memory in use in Task Manager doesn't drop. It rises around 50kB every time a control is created and added to the layout panel. I downloaded the trial of .NET Memory Profiler and it showed there were references to the menu strip hanging around after the controls were disposed. I changed the code to explicitly set the ContextMenuStrip property to null before disposing of the control, and yep, the memory is now released. Why is this? Shouldn't the GC clean up this type of thing?

    Read the article

  • Incrementing a Cookie with PHP (Beginner Question)

    - by BandonRandon
    Hello, I have used sessions before but never cookies. I would like to use cookies for two reasons: 1) it's something new to learn; 2) I would like to have the cookie expire in an hour or so (I know in the code example it expires in 40 sec). I am trying to write a basic if statement:

    ```php
    if ($counter == "1") {
        // do this second
    } elseif ($counter >= "2") {
        // do this every time after the first and second
    } else {
        // this is the first action as counter is zero
    }
    ```

    Here is the code I'm using to set the cookie:

    ```php
    // if cookie doesn't exist, set the default
    if (!isset($_COOKIE["counter_cookie"])) {
        $counter = setcookie("counter_cookie", 0, time() + 40);
    }
    // increment it
    $counter++;
    // save it
    setcookie("counter_cookie", $counter, time() + 40);
    $counter = $_COOKIE["counter_cookie"];
    ```

    The problem is that the counter will be set from 0 to 1 but won't go from 1 to 2 and so on. Any help would be great; I know this is a really simple stupid question :| Thanks!

    Read the article

  • Portable way to determine the platform's line separator

    - by Adrian McCarthy
    Different platforms use different line separator schemes (LF, CR-LF, CR, NEL, Unicode LINE SEPARATOR, etc.). C++ (and C) make a lot of this transparent to most programs, by converting '\n' to and from the target platform's native new line encoding. But if your program needs to determine the actual byte sequence used, how could you do it portably? The best method I've come up with is:
    1. Write a temporary file in text mode with just '\n' in it, letting the run-time do the translation.
    2. Read back the temporary file in binary mode to see the actual bytes.
    That feels kludgy. Is there a way to do it without temporary files? I tried stringstreams instead, but the run-time doesn't actually translate '\n' in that context (which makes sense). Does the run-time expose this information in some other way?

    Read the article

  • Optimizing Python code performance when importing a zipped CSV into a Mongo collection

    - by mark
    I need to import a zipped CSV into a Mongo collection, but there is a catch - every record contains a timestamp in Pacific Time, which must be converted to the local time corresponding to the (longitude, latitude) pair found in the same record. The code looks like so:

    ```python
    def read_csv_zip(path, timezones):
        with ZipFile(path) as z, z.open(z.namelist()[0]) as input:
            csv_rows = csv.reader(input)
            header = csv_rows.next()
            check, converters = get_aux_stuff(header)
            for csv_row in csv_rows:
                if check(csv_row):
                    row = {
                        converter[0]: converter[1](value)
                        for converter, value in zip(converters, csv_row)
                        if allow_field(converter)
                    }
                    ts = row['ts']
                    lng, lat = row['loc']
                    found_tz_entry = timezones.find_one(SON({'loc': {'$within': {'$box': [
                        [lng - tz_lookup_radius, lat - tz_lookup_radius],
                        [lng + tz_lookup_radius, lat + tz_lookup_radius]]}}}))
                    if found_tz_entry:
                        tz_name = found_tz_entry['tz']
                        local_ts = ts.astimezone(timezone(tz_name)).replace(tzinfo=None)
                        row['tz'] = tz_name
                    else:
                        local_ts = (ts.astimezone(utc) + timedelta(hours=int(lng / 15))).replace(tzinfo=None)
                    row['local_ts'] = local_ts
                    yield row

    def insert_documents(collection, source, batch_size):
        while True:
            items = list(itertools.islice(source, batch_size))
            if len(items) == 0:
                break
            try:
                collection.insert(items)
            except:
                for item in items:
                    try:
                        collection.insert(item)
                    except Exception as exc:
                        print("Failed to insert record {0} - {1}".format(item['_id'], exc))

    def main(zip_path):
        with Connection() as connection:
            data = connection.mydb.data
            timezones = connection.timezones.data
            insert_documents(data, read_csv_zip(zip_path, timezones), 1000)
    ```

    The code proceeds as follows:
    1. Every record read from the CSV is checked and converted to a dictionary, where some fields may be skipped, some titles renamed (from those appearing in the CSV header), and some values converted (to datetime, to integers, to floats, etc.).
    2. For each record read from the CSV, a lookup is made into the timezones collection to map the record location to the respective time zone.
    3. If the mapping is successful, that time zone is used to convert the record timestamp (Pacific Time) to the respective local timestamp. If no mapping is found, a rough approximation is calculated.

    The timezones collection is appropriately indexed, of course - calling explain() confirms it. The process is slow. Naturally, having to query the timezones collection for every record kills the performance. I am looking for advice on how to improve it. Thanks.

    EDIT: The timezones collection contains 8176040 records, each containing four values:

    ```
    > db.data.findOne()
    { "_id" : 3038814, "loc" : [ 1.48333, 42.5 ], "tz" : "Europe/Andorra" }
    ```

    EDIT2: OK, I have compiled a release build of http://toblerity.github.com/rtree/ and configured the rtree package. Then I created an rtree dat/idx pair of files corresponding to my timezones collection, so instead of calling collection.find_one I call index.intersection. Surprisingly, not only is there no improvement, it actually works even more slowly now! Maybe rtree could be fine-tuned to load the entire dat/idx pair into RAM (704M), but I do not know how to do it. Until then, it is not an alternative. In general, I think the solution should involve parallelization of the task.

    EDIT3: Profile output when using collection.find_one:

    ```
    >>> p.sort_stats('cumulative').print_stats(10)
    Tue Apr 10 14:28:39 2012    ImportDataIntoMongo.profile

    64549590 function calls (64549180 primitive calls) in 1231.257 seconds

    Ordered by: cumulative time
    List reduced from 730 to 10 due to restriction <10>

    ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
         1    0.012    0.012 1231.257 1231.257  ImportDataIntoMongo.py:1(<module>)
         1    0.001    0.001 1230.959 1230.959  ImportDataIntoMongo.py:187(main)
         1  853.558  853.558  853.558  853.558  {raw_input}
         1    0.598    0.598  370.510  370.510  ImportDataIntoMongo.py:165(insert_documents)
    343407    9.965    0.000  359.034    0.001  ImportDataIntoMongo.py:137(read_csv_zip)
    343408    2.927    0.000  287.035    0.001  c:\python27\lib\site-packages\pymongo\collection.py:489(find_one)
    343408    1.842    0.000  274.803    0.001  c:\python27\lib\site-packages\pymongo\cursor.py:699(next)
    343408    2.542    0.000  271.212    0.001  c:\python27\lib\site-packages\pymongo\cursor.py:644(_refresh)
    343408    4.512    0.000  253.673    0.001  c:\python27\lib\site-packages\pymongo\cursor.py:605(__send_message)
    343408    0.971    0.000  242.078    0.001  c:\python27\lib\site-packages\pymongo\connection.py:871(_send_message_with_response)
    ```

    Profile output when using index.intersection:

    ```
    >>> p.sort_stats('cumulative').print_stats(10)
    Wed Apr 11 16:21:31 2012    ImportDataIntoMongo.profile

    41542960 function calls (41542536 primitive calls) in 2889.164 seconds

    Ordered by: cumulative time
    List reduced from 778 to 10 due to restriction <10>

    ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
         1    0.028    0.028 2889.164 2889.164  ImportDataIntoMongo.py:1(<module>)
         1    0.017    0.017 2888.679 2888.679  ImportDataIntoMongo.py:202(main)
         1 2365.526 2365.526 2365.526 2365.526  {raw_input}
         1    0.766    0.766  502.817  502.817  ImportDataIntoMongo.py:180(insert_documents)
    343407    9.147    0.000  491.433    0.001  ImportDataIntoMongo.py:152(read_csv_zip)
    343406    0.571    0.000  391.394    0.001  c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:384(intersection)
    343406  379.957    0.001  390.824    0.001  c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:435(_intersection_obj)
    686513   22.616    0.000   38.705    0.000  c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:451(_get_objects)
    343406    6.134    0.000   33.326    0.000  ImportDataIntoMongo.py:162(<dictcomp>)
       346    0.396    0.001   30.665    0.089  c:\python27\lib\site-packages\pymongo\collection.py:240(insert)
    ```

    EDIT4: I have parallelized the code, but the results are still not very encouraging. I am convinced it could be done better. See my own answer to this question for details.
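
    One direction worth trying (a sketch under assumptions, not the poster's code): cache the timezone lookups on a coarse (lng, lat) grid, so points falling in the same cell reuse a previous result instead of hitting MongoDB again. The 0.5-degree grid size is an invented parameter, and the approach trades some accuracy near cell boundaries for far fewer queries:

    ```python
    # Sketch only: memoize find_one() results per grid cell for a pymongo collection.
    _tz_cache = {}

    def lookup_tz(timezones, lng, lat, radius, grid=0.5):
        key = (round(lng / grid), round(lat / grid))   # coarse cell key; grid size is an assumption
        if key not in _tz_cache:
            _tz_cache[key] = timezones.find_one(
                {'loc': {'$within': {'$box': [[lng - radius, lat - radius],
                                              [lng + radius, lat + radius]]}}})
        return _tz_cache[key]
    ```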

    Read the article

  • How to keep Hibernate mapping use under control as requirements grow

    - by David Plumpton
    I've worked on a number of Java web apps where persistence is via Hibernate, and we start off with some central class (e.g. an insurance application) without any time being spent considering how to break things up into manageable chunks. Over time, as features are added, we add more mappings (rates, clients, addresses, etc.), and the amount of time spent saving and loading an insurance object and everything it connects to grows. In particular, you get close to a go-live date and performance testing with larger amounts of data in each table starts to demonstrate that it's all too slow. Obviously there are a number of ways we could attempt to partition things up, e.g. map only the client classes for the client CRUD screens, etc., which would have been better to get in place earlier rather than trying to work in at the end of the dev cycle. I'm just wondering if there are recommendations about ways to handle/mitigate this.

    Read the article

  • Integrate ClickOnce update in a setup

    - by Erick
    As a recent decision in my team, we decided to deploy with ClickOnce a piece of software we have been developing for some time now. Previously it was deployed with a merge module in a setup project. We had in mind to deploy the ClickOnce setup/updater as part of the web server setup. The problem is not exactly integrating it, but making sure we limit the manual work needed to integrate it. Normally, I guess, it would be necessary to "add - file" and select the folder containing the setup.exe + publish.htm + application files + .application, but I fear that each time we publish a new version to the hard drive we would have to update the setup project as well. Would someone have some insight to help with that? (Especially how to avoid adding the program_[version] folder inside the application files that is created each time a new publish is done.)

    Read the article

  • Should I continue using R v2.8.1 ?

    - by Mehper C. Palavuzlar
    I've been using R v2.8.1 for a long time. Normally I would upgrade it to the latest version, but something keeps me away from the builds later than 2.8.1: I use read.table(file=file.choose(), header=TRUE) frequently in my libraries. After upgrading to 2.9.0, R stopped remembering the last directory used when selecting a file. I downgraded to 2.8.1 and now R can remember the last directory used again. I don't know why they changed that behavior, but it is absolutely crucial for me: it wastes my time in v2.9.0 every time I try to find a specific directory that R cannot remember. Now R 2.10.1 is released. I don't know if they have corrected this issue. Should I upgrade, or is it enough to continue using v2.8.1? Will I miss something if I stick with 2.8.1?

    Read the article

  • Is there any reason why someone would want to create a Core Data model programmatically?

    - by mystify
    I wonder in which cases it would be good to make an NSManagedObjectModel completely programmatically, with NSEntityDescription instances and all this stuff. I'm the kind of person who prefers to code programmatically, rejecting Interface Builder. But when it comes to Core Data, I have a hard time figuring out why I should kill my time NOT using the nice Xcode data modeler tool. And since data models are stuck to a given state (except when you want to do some ugly migration operations where things probably go wrong and users get mad, really mad), I see no big sense in a data model that's made programmatically for the purpose of changing it all the time. Did I miss something?

    Read the article

  • Click a submit button multiple times

    - by Olga Anastasiadou
    Hi all, I encountered a problem that I can't solve. My goal is to "click" my submit button and increase a counter each time, until the counter reaches 10. It works the first time, but that is it! My test code is below:

    ```php
    <form name="testForm" method="post">
    <?php $cnt = 0; ?>
        <input type="submit" name="next" id="next" value="NEXT"/>
    <?php
        if (isset($_POST['next'])) {
            if ($cnt < 10) {
                echo $cnt . ' --> ';
                $cnt++;
                echo $cnt;
            }
        }
    ?>
    </form>
    ```

    Only "0 --> 1" is printed, every time... please help! Thanks

    Read the article

  • How do you map a composite id to a composite user type with Fluent NHibernate?

    - by gabe
    I'm working with a legacy database that is set up stupidly, with an index composed of a char id column and two char columns which hold a date and a time, respectively. I have created an ICompositeUserType for the date and time columns that maps them to a single .NET DateTime property on my entity, which works by itself when it is not part of the id. I need to somehow use a composite id with key property mapping that includes my ICompositeUserType for mapping to the two char date and time columns. Apparently, with my version of Fluent NHibernate, CompositeIdentityPart doesn't have a CustomTypeIs() method, so I can't just do the following in my override:

    ```csharp
    mapping.UseCompositeId()
        .WithKeyProperty(x => x.Id, CommonDatabaseFieldNames.Id)
        .WithKeyProperty(x => x.FileCreationDateTime)
        .CustomTypeIs<FileCreationDateTimeType>();
    ```

    Is something like this even possible with NHibernate, let alone Fluent NHibernate? I haven't been able to find anything on this.

    Read the article

  • Does normalization really hurt performance in high traffic sites?

    - by Luke101
    I am designing a database and I would like to normalize it. In one query I will be joining about 30-40 tables. Will this hurt the website's performance if it ever becomes extremely popular? This will be the main query, and it will be called 50% of the time. In the other queries I will be joining about 2 tables. I have a choice right now to normalize or not to normalize, but if the normalization becomes a problem in the future I may have to rewrite 40% of the software, and that may take me a long time. Does normalization really hurt in this case? Should I denormalize now, while I have the time?

    Read the article

  • How to get a JSON object's value if its name contains dots?

    - by manakor
    I have a very simple JSON array (please focus on the "points.bean.pointsBase" object):

    ```javascript
    var mydata = {"list": [
        {"points.bean.pointsBase": [
            {"time": 2000, "caption": "caption text", duration: 5000},
            {"time": 6000, "caption": "caption text", duration: 3000}
        ]}
    ]};

    // Usually we make smth like this to get the value:
    var smth = mydata.list[0].points.bean.pointsBase[0].time;
    alert(smth); // should display 2000
    ```

    But, unfortunately, it displays nothing. When I change "points.bean.pointsBase" to something without dots in its name, everything works! However, I can't change this name to anything else without dots, and I still need to get the value. Is there any way to get it?

    Read the article

  • Very Different Execution Times of SQL Query in C# and SQL Server Management Studio

    - by Paul
    I have a simple SQL query that when run from C# takes over 30 seconds and then times out every time, whereas when run in SQL Server Management Studio it successfully completes instantly. In the latter case, a query execution plan reveals nothing troubling, and the execution time is spread nicely through a few simple operations. I've run 'EXEC sp_who2' while the query is running from C#, and it is listed as taking 29,000 milliseconds of CPU time and is not blocked by anything. I have no idea how to begin solving this. Does anyone have some insight? The query is:

    ```sql
    SELECT c.lngId, ...
    FROM tblCase c
    INNER JOIN tblCaseStatus s ON s.lngId = c.lngId
    INNER JOIN tblCaseStatusType t ON t.lngId = s.lngId
    INNER JOIN [Another Database]..tblCompany cm ON cm.lngId = cs.lngCompanyId
    WHERE t.lngId = 25
      AND c.IsDeleted = 0
      AND s.lngStatus = 1
    ```

    Read the article

  • MySQL Join issue

    - by mouthpiec
    Hi, I have the following tables:

    ```
    -- table sportactivity --
    sport_activity_id, home_team_fk, away_team_fk, competition_id_fk, date, time
    (tuple example) - 1, 33, 41, 5, 2010-04-14, 05:40:00

    -- table teams --
    team_id, team_name
    (tuple example) - 1, Algeria
    ```

    Now I have the following SQL statement that I use to extract Team A vs Team B:

    ```sql
    SELECT sport_activity_id,
           T1.team_name AS TeamA,
           T2.team_name AS TeamB,
           DATE_FORMAT( DATE, '%d/%m/%Y' ) AS DATE,
           DATE_FORMAT( TIME, '%H:%i' ) AS TIME
    FROM sportactivity
    JOIN teams T1 ON home_team_fk = T1.team_id
    JOIN teams T2 ON ( away_team_fk = T2.team_id OR away_team_fk = '0' )
    WHERE DATE( DATE ) >= CURDATE( )
    ORDER BY DATE( DATE )
    ```

    My problem is that when team B is empty, I get irrelevant information... it seems the query returns all the combinations. I need a query so that when team B is equal to 0 (this can occur in my scenario), I get only Team A - Team B (as 0) once.

    Read the article

  • Organization moving from ASP.NET to WebLogic - training recommendations?

    - by frankadelic
    My organization previously used ASP.NET for web projects. They are now migrating to WebLogic/JEE for all future web projects. My experience as a Lead Developer / Architect is totally with .NET projects. I want to get ramped up on WebLogic/JEE, so I can contribute to future projects. Any training/certification suggestions for WebLogic/JEE, in a 6-month time frame? Assume that I will need to fund my own training, and I am working full time. So, money and time are limited.

    Read the article

  • overview/history of resident memory usage

    - by kapet
    I have a fairly complicated program (Python with SWIG'ed C++ code, a long-running server) that shows constantly growing resident memory usage. I've been digging with the usual tools for the leak (valgrind, Python's gc module, etc.) but to no avail so far. I'm a bit afraid that the actual problem is memory fragmentation within Python and/or libc-managed memory. Anyway, my question is more specific right now: is there a tool to visualize resident memory usage and ideally show how it develops over time? I think the raw data is in /proc/$PID/smaps, but I was hoping there's some tool that shows me a nice graph of the amounts used by mmap'ed files vs. anonymous mmap'ed memory vs. heap over time, so that it's easier to see (literally) what's changing. I couldn't find anything though. Does anybody know of a ready-to-use tool that graphs memory usage over space and time in an intuitive way?
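
    For collecting the raw numbers, a small Linux-only sampler over /proc/<pid>/smaps is easy to write; a sketch (the 5-second interval is arbitrary) that prints timestamped RSS totals which can be plotted later:

    ```python
    import re
    import sys
    import time

    def resident_kb(pid):
        """Sum all 'Rss:' lines in /proc/<pid>/smaps (values are reported in kB)."""
        total = 0
        with open("/proc/%s/smaps" % pid) as f:
            for line in f:
                m = re.match(r"Rss:\s+(\d+) kB", line)
                if m:
                    total += int(m.group(1))
        return total

    if __name__ == "__main__":
        pid = sys.argv[1]
        while True:
            # One "timestamp rss_kb" line per sample; redirect to a file for plotting.
            print("%d %d" % (time.time(), resident_kb(pid)))
            time.sleep(5)
    ```

    The same loop could be extended to bucket the per-mapping Rss values by mapping name (file-backed vs. anonymous) to get the breakdown asked about above.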

    Read the article

  • NSDateFormatter puzzle! Is this a new Mountain Lion bug?

    - by user1631526
    I believe I might have found a bug in NSDateFormatter, and I am not sure if it is specific to Brazil's date format. This is my resulting log:

    ```
    2012-08-26 01:32:17.875 nsDateFormatterPuzzle[3261:303] original item # 0 = 2011-09-18
    2012-08-26 01:32:17.876 nsDateFormatterPuzzle[3261:303] original item # 1 = 2011-10-16
    2012-08-26 01:32:17.876 nsDateFormatterPuzzle[3261:303] original item # 2 = 2011-11-13
    2012-08-26 01:32:17.877 nsDateFormatterPuzzle[3261:303] original item # 3 = 2011-12-11
    2012-08-26 01:32:17.877 nsDateFormatterPuzzle[3261:303] original item # 4 = 2012-01-08
    2012-08-26 01:32:17.877 nsDateFormatterPuzzle[3261:303] original item # 5 = 2012-02-05
    2012-08-26 01:32:17.883 nsDateFormatterPuzzle[3261:303] formatted item # 0 = 18/09/2011
    2012-08-26 01:32:17.884 nsDateFormatterPuzzle[3261:303] formatted item # 1 = (null)
    2012-08-26 01:32:17.885 nsDateFormatterPuzzle[3261:303] formatted item # 2 = 13/11/2011
    2012-08-26 01:32:17.886 nsDateFormatterPuzzle[3261:303] formatted item # 3 = 11/12/2011
    2012-08-26 01:32:17.887 nsDateFormatterPuzzle[3261:303] formatted item # 4 = 08/01/2012
    2012-08-26 01:32:17.887 nsDateFormatterPuzzle[3261:303] formatted item # 5 = 05/02/2012
    ```

    Note that if you go to System Preferences -- Date and Time -- Time Zone and change your time zone to Rio de Janeiro, you will have the same results.

    Read the article
