Search Results

Search found 17686 results on 708 pages for 'high level'.

Page 57/708

  • Django admin causes high load for one model...

    - by Joe
    In my Django admin, when I try to view/edit objects from one particular model class, the memory usage and CPU rocket up and I have to restart the server. I can view the list of objects fine; the problem comes when I click on one of the objects. Other models are fine. Working with the object in code (i.e. creating and displaying) is OK; the problem only arises when I try to view an object with the admin interface. The class isn't even particularly exotic:

        class Comment(models.Model):
            user = models.ForeignKey(User)
            thing = models.ForeignKey(Thing)
            date = models.DateTimeField(auto_now_add=True)
            content = models.TextField(blank=True, null=True)
            approved = models.BooleanField(default=True)

            class Meta:
                ordering = ['-date']

    Any ideas? I'm stumped. The only reason I could think of is that thing is quite a large object (a few KB), but as I understand it, it wouldn't get loaded until it was needed (correct?).
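
    A likely culprit, though this is an assumption rather than something confirmed in the question: the admin change form renders each ForeignKey as a drop-down that loads every related User and Thing row into memory, which gets expensive once those tables grow. A minimal sketch of the usual workaround, switching those fields to raw-ID widgets (the "myapp" import path is hypothetical; the field names mirror the model above):

        # admin.py - sketch, assuming the Comment model shown in the question
        from django.contrib import admin
        from myapp.models import Comment  # hypothetical import path

        class CommentAdmin(admin.ModelAdmin):
            # Render user/thing as plain ID inputs with a lookup popup
            # instead of <select> lists containing every related row.
            raw_id_fields = ('user', 'thing')
            list_display = ('user', 'date', 'approved')

        admin.site.register(Comment, CommentAdmin)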

    Read the article

  • Entry Level Programming Jobs with Applied Math Degree

    - by Mark
    I am about to finish my B.Sc. in Applied Math. I started out in CS a few years back, had a bit of a change of heart and decided to go the math route. Now that I am finishing up and looking at career options, I'm just wondering how my Applied Math degree will look when applying for programming jobs. I have taken CS courses in C++/Java/C and done two semesters of Scientific Computing with MATLAB/Mathematica and the like, so I feel like I at least know how to program. Of course I am lacking some of the theoretical courses on the CS side. I'd very much like to know how I stack up for a programming job as a math major. Thanks.

    Read the article

  • Perform Grouping of Resultsets in Code, not on Database Level

    - by NinjaBomb
    Stackoverflowers, I have a resultset from a SQL query in the form of:

        Category  Column2  Column3
        A         2        3.50
        A         3        2
        B         3        2
        B         1        5
        ...

    I need to group the resultset on the Category column and sum the values of Column2 and Column3. I have to do the grouping in code because I cannot do it in the SQL query that gets the data, due to the complexity of the query (long story). The grouped data will then be displayed in a table. I have it working for a specific set of values in the Category column, but I would like a solution that handles any values that appear in the Category column. I know there has to be a straightforward, efficient way to do it, but I cannot wrap my head around it right now. How would you accomplish it?

    EDIT I have attempted to group the result in SQL using the exact same grouping query suggested by Thomas Levesque, and both times our entire RDBMS crashed trying to process the query. I was under the impression that LINQ was not available until .NET 3.5. This is a .NET 2.0 web application, so I did not think it was an option. Am I wrong in thinking that?

    EDIT Starting a bounty because I believe this would be a good technique to have in the toolbox, no matter where the different resultsets are coming from. I believe knowing the most concise way to group any two somewhat similar sets of data in code (without .NET LINQ) would be beneficial to more people than just me.
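
    A hedged sketch of the usual in-code approach: accumulate the sums in a dictionary keyed by Category while iterating the resultset once. It is written in Python purely for brevity; in a .NET 2.0 web application the same shape maps onto a Dictionary<string, decimal[]> filled while looping over the DataReader/DataTable (the names here are illustrative, not from the question):

        # Group rows by category and sum Column2/Column3 in a single pass.
        def group_and_sum(rows):
            totals = {}  # category -> [sum_col2, sum_col3]
            for category, col2, col3 in rows:
                bucket = totals.setdefault(category, [0, 0.0])
                bucket[0] += col2
                bucket[1] += col3
            return totals

        rows = [("A", 2, 3.50), ("A", 3, 2), ("B", 3, 2), ("B", 1, 5)]
        print(group_and_sum(rows))  # {'A': [5, 5.5], 'B': [4, 7.0]}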

    Read the article

  • Changing API level Android Studio

    - by James B
    I want to change the minimum SDK version in Android Studio from API 12 to API 14. I have tried changing it in the manifest file, i.e., <uses-sdk android:minSdkVersion="14" android:targetSdkVersion="18" />, and rebuilding the project, but the Android Studio IDE still throws up some errors. I presume I have to set the min SDK in 'project properties' or something similar so the IDE recognises the change, but I can't find where this is done in Android Studio. Pointers greatly appreciated. Thanks.

    Read the article

  • High performance text file parsing in .net

    - by diamandiev
    Here is the situation: I am making a small program to parse server log files. I tested it with a log file with several thousand requests (between 10,000 and 20,000, I don't know exactly). What I have to do is load the log text files into memory so that I can query them. This is taking the most resources. The methods that take the most CPU time are these (worst culprits first):

        string.Split          - splits the line values into an array of values
        string.Contains       - checks whether the user agent contains a specific agent string (to determine the browser ID)
        string.ToLower        - various purposes
        StreamReader.ReadLine - reads the log file line by line
        string.StartsWith     - determines whether a line is a column-definition line or a line with values

    There were some others that I was able to replace. For example, the dictionary getter was taking lots of resources too, which I had not expected since it's a dictionary and should have its keys indexed. I replaced it with a multidimensional array and saved some CPU time. Now I am running on a fast dual core and the total time it takes to load the file I mentioned is about 1 second. This is really bad: imagine a site that has tens of thousands of visits a day; it's going to take minutes to load the log file. So what are my alternatives, if any? Because I think this is just a .NET limitation and I can't do much about it.

    Read the article

  • WPF - DataTemplate to show only one hierarchical level in recursive data

    - by Paull
    Hi all, I am using a tree to display my data: persons grouped by their teams. In my model a team can contain another team, hence the recursion. I want to display details about the selected node of the tree using a ContentPresenter. If the selection is a person, everything is fine: I can show the person's name or details without problem using a simple DataTemplate. If the selection is a team, I would like to display the team name followed by a list of member names. If one of these members is another team, I would like to display just the team name, without recursion... My code here is wrong because it displays the data recursively. What is the right way of doing it? Thanks in advance for any help! Best regards, Paolo

        <ContentPresenter Content="{Binding Path=SelectedItem, ElementName=PeopleTree}" >
            <ContentPresenter.Resources>
                <DataTemplate DataType="{x:Type my:PersonViewModel}">
                    <TextBlock Text="{Binding PersonName}"/>
                </DataTemplate>
                <DataTemplate DataType="{x:Type my:TeamViewModel}">
                    <StackPanel>
                        <TextBlock Text="{Binding TeamName}" />
                        <ListBox ItemsSource="{Binding Members}" />
                    </StackPanel>
                </DataTemplate>
            </ContentPresenter.Resources>
        </ContentPresenter>

    Read the article

  • Quickest algorithm for finding sets with high intersection

    - by conradlee
    I have a large number of user IDs (integers), potentially millions. These users all belong to various groups (sets of integers), such that there are on the order of 10 million groups. To simplify my example and get to the essence of it, let's assume that all groups contain 20 user IDs (i.e., all integer sets have a cardinality of 100). I want to find all pairs of integer sets that have an intersection of 15 or greater. Should I compare every pair of sets? (If I keep a data structure that maps user IDs to set membership, this would not be necessary.) What is the quickest way to do this? That is, what should my underlying data structure be for representing the integer sets? Sorted sets, unsorted; can hashing somehow help? And what algorithm should I use to compute the set intersections? I prefer answers that relate to C/C++ (especially the STL), but more general algorithmic insights are also welcome.

    Update Also note that I will be running this in parallel in a shared-memory environment, so ideas that cleanly extend to a parallel solution are preferred.
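
    The hint in the question about mapping user IDs to set membership is the usual way to avoid comparing every pair: build an inverted index from user ID to the groups containing it, then count co-occurrences, so only pairs that actually share a member are ever touched. Below is a sketch of that counting approach in Python with made-up toy data; the C++/STL version the asker prefers would use the same structure with an unordered_map over sorted vectors:

        from collections import defaultdict
        from itertools import combinations

        def high_intersection_pairs(groups, threshold=15):
            """groups: dict of group ID -> set of user IDs.
            Returns pairs of group IDs sharing at least `threshold` members."""
            # Inverted index: user ID -> groups that contain this user.
            member_of = defaultdict(list)
            for gid, users in groups.items():
                for uid in users:
                    member_of[uid].append(gid)

            # Count shared members for every pair that co-occurs at least once.
            pair_counts = defaultdict(int)
            for gids in member_of.values():
                for a, b in combinations(sorted(gids), 2):
                    pair_counts[(a, b)] += 1

            return [pair for pair, c in pair_counts.items() if c >= threshold]

        # Toy example: two groups sharing 16 users, one unrelated group.
        groups = {
            "g1": set(range(0, 20)),
            "g2": set(range(4, 24)),
            "g3": set(range(100, 120)),
        }
        print(high_intersection_pairs(groups))  # [('g1', 'g2')]

    At the stated scale the pair-count map is the part to watch: very popular users generate quadratically many candidate pairs, so one option is to shard the counting loop by user ID (which also parallelises cleanly in shared memory) and treat unusually long membership lists separately.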

    Read the article

  • Usage of Maven (and open source in general) in high governance and risk-averse large organizations (

    - by bart
    Does anyone have any good stories of these kinds of organizations being open to using open source (such as tools like Maven, etc.)? Many staff I've encountered have had little or no exposure to open source/open systems, and open source is treated with great suspicion. Some reasons given for this are lack of support and robustness, which is ironic given the number of end-of-life, unsupported vendor products that are in production. Bonus points for any success stories where you've seen open source go into orgs like this and deliver a real benefit!

    Read the article

  • Storing high precision latitude/longitude numbers in iOS Core Data

    - by Bryan
    I'm trying to store latitudes/longitudes in Core Data. These end up having anywhere from 6 to 20 digits of precision. For whatever reason, I had them as floats in Core Data, and it's rounding them and not giving me the exact values back. I tried the "decimal" type, with no luck either. Are NSStrings my only other option?

    EDIT NSManagedObject:

        @interface Event : NSManagedObject {
        }
        @property (nonatomic, retain) NSDecimalNumber * dec;
        @property (nonatomic, retain) NSDate * timeStamp;
        @property (nonatomic, retain) NSNumber * flo;
        @property (nonatomic, retain) NSNumber * doub;

    Here's the code for a sample number that I store into Core Data:

        NSNumber *n = [NSDecimalNumber decimalNumberWithString:@"-97.12345678901234567890123456789"];

    Code to access it again:

        NSNumber *n = [managedObject valueForKey:@"dec"];
        NSNumber *f = [managedObject valueForKey:@"flo"];
        NSNumber *d = [managedObject valueForKey:@"doub"];

    Printed values:

        Printing description of n: -97.1234567890124
        Printing description of f: <CFNumber 0x603f250 [0xfef3e0]>{value = -97.12345678901235146441, type = kCFNumberFloat64Type}
        Printing description of d: <CFNumber 0x6040310 [0xfef3e0]>{value = -97.12345678901235146441, type = kCFNumberFloat64Type}
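
    Whichever Core Data attribute type is used, the rounding in those printed values is what IEEE-754 doubles do: they carry roughly 15-17 significant decimal digits, so a 20-plus-digit coordinate cannot round-trip through a float/double attribute. A quick Python illustration of the same effect, using the value from the question:

        from decimal import Decimal

        s = "-97.12345678901234567890123456789"

        # A binary double keeps only ~15-17 significant decimal digits.
        print(float(s))    # -97.12345678901235
        # An arbitrary-precision decimal (or the original string) keeps them all.
        print(Decimal(s))  # -97.12345678901234567890123456789

    As a practical note, seven decimal places of latitude is already about a centimetre on the ground, so a double attribute is usually enough unless the extra digits must be preserved verbatim.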

    Read the article

  • Important question about linq to SQL performance on high loaded web applications

    - by Alex
    I started working with LINQ to SQL several weeks ago. I got really tired of working with SQL Server directly through SQL queries (SqlDataReader, SqlCommand and all this good stuff). After hearing about LINQ to SQL and MVC I quickly moved all my projects to these technologies. I expected LINQ to SQL to work slower, but it surprisingly turned out to be pretty fast, primarily because I always forgot to close my connections when using data readers. Now I don't have to worry about it. But there's one problem that really bothers me. There's one page that's requested thousands of times a day. The system gets the data in the beginning, works with it and updates it. Primarily the updates are ++ and -- (increase and decrease values). I used to do it like this:

        UPDATE table SET value = value + 1 WHERE ID = @id

    It worked with no problems, obviously. But with LINQ to SQL the data is taken in the beginning, moved to the class, changed and then saved:

        Stats.RegisteredUsers++;
        Db.SubmitChanges();

    Let's say there were 100,000 users. LINQ will say "let it be 100,001" instead of "let it be increased by 1". But if the value has already been increased by someone else (which happens on my site all the time), then LINQ will say "oops, this value is already 100,001, I'll throw an exception". You can change this behavior so that it won't throw an exception, but it still will not set the value to 100,002. Like I said, it happened to me all the time; the stats value was increased twice a second on average. I simply had to rewrite this chunk of code with classic ADO.NET. So my question is: how can you solve this problem with LINQ?
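
    The underlying issue is read-modify-write versus an atomic relative update: the hand-written UPDATE tells the database "add 1 to whatever is there now", so concurrent requests cannot lose increments, while loading the row, incrementing in memory and calling SubmitChanges can. A tiny Python/sqlite3 sketch of the difference (table and column names are made up); in LINQ to SQL the equivalent escape hatch is to send the same relative UPDATE through DataContext.ExecuteCommand instead of round-tripping the value:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE stats (id INTEGER PRIMARY KEY, value INTEGER)")
        conn.execute("INSERT INTO stats VALUES (1, 100000)")

        # Read-modify-write: two requests that both read 100000 would
        # both try to write 100001, and one increment is lost.
        value = conn.execute("SELECT value FROM stats WHERE id = 1").fetchone()[0]
        conn.execute("UPDATE stats SET value = ? WHERE id = 1", (value + 1,))

        # Atomic relative update: each request adds 1 to the current value,
        # so concurrent increments are never lost.
        conn.execute("UPDATE stats SET value = value + 1 WHERE id = 1")

        print(conn.execute("SELECT value FROM stats WHERE id = 1").fetchone()[0])  # 100002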

    Read the article

  • Recommendations of a high volume log event viewer in a Java enviroment

    - by Thorbjørn Ravn Andersen
    I am in a situation where I would like to accept a LOT of log events controlled by me - notably from the logging agent I am preparing for slf4j - and then analyze them interactively. I am not as such interested in a facility that presents formatted log files, but one that can accept log events as objects and allow me to sort and display them on e.g. threads and timelines etc. Chainsaw could maybe be an option, but it is currently not compatible with Logback, which I use for technical reasons. Is there any project, with stand-alone viewers or embedded in an IDE, which would be suitable for this kind of log handling? (I am aware that I am approaching what might be suitable for a profiler, so if there is a profiler project suitable for this kind of data acquisition and display where I can feed the event pipe, I would like to hear about it.) Thanks for all feedback.

    Update 2009-03-19: I have found that there is no log viewer which allows me to see what I would like (a visual display of events with coordinates determined by day and time, etc.), so I have decided to create a very terse XML format derived from the log4j XMLLayout, adapted to be as readable as possible while still being valid XML snippets, and then use the Microsoft LogParser to extract the information I need for post-processing in other tools.

    Read the article

  • MsSql Server high Resource Waits and Head Blocker

    - by MartinHN
    Hi, I have a MS SQL Server 2008 Standard installation running a database for a webshop. The current size of the database is 2.5 GB, running on Windows 2008 Standard, dual Intel Xeon X5355 @ 2.00 GHz, 4 GB RAM. When I open the Activity Monitor, I see that I have a Wait Time (ms/sec) of 5000 in the "Other" category. In the Processes list, for all connections from the webshop, the Head Blocker value is 1. I see every day that when I try to access the website, it can take 20-30 seconds before it even starts to "work". I know that it is not network latency (I have a 301 redirect from the same server that is executed instantly). Once the first request has been served, it seems as if it's not asleep anymore, and every subsequent request is served instantly, with the speed of light. The problem was worse two weeks ago, until I changed every query to include WITH (NOLOCK). But I still experience the problem, and the wait times in the Activity Monitor are about the same. The largest table (Images) has 32764 rows (448576 KB). Some tables exceed 300000 rows, though they're much smaller in size than the Images table. I have only the default clustered index on every primary key column. Any ideas?

    Read the article

  • [rails] do we need database level constraints

    - by shrimpy
    I have the same problem as in the following post: http://stackoverflow.com/questions/1451570/ruby-on-rails-database-migration-not-creating-foreign-keys-in-mysql-tables So I am wondering: why does Rails not generate foreign keys by default? Is it not necessary? Or are we supposed to do it manually?

    Read the article

  • Zoom image to pixel level

    - by zaf
    For an art project, one of the things I'll be doing is zooming in on an image to a particular pixel. I've been rubbing my chin and would love some advice on how to proceed. Here are the input parameters:

        Screen:
            sw - screen width
            sh - screen height
        Image:
            iw - image width
            ih - image height
        Pixel:
            px - x position of the pixel in the image
            py - y position of the pixel in the image
        Zoom:
            zf - zoom factor (0.0 to 1.0)
        Background colour:
            bc - background colour to use when the screen and image aspect ratios differ

    Outputs:

        The zoomed image (no anti-aliasing)
        The screen position/dimensions of the pixel we are zooming to

    When zf is 0, the image must fit the screen with the correct aspect ratio. When zf is 1, the selected pixel fits the screen with the correct aspect ratio. One idea I had was to use something like POV-Ray and move the camera towards a big image texture, or some library (e.g. pygame) to do the zooming. Can anyone think of something more clever, with simple pseudo code? To keep it simpler you can make the image and screen have the same aspect ratio; I can live with that. I'll update with more info as it's required.
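
    A hedged sketch of one way to compute the zoomed view in Python, using the parameter names from the question; the geometric blend between the two scales is my own choice (a linear blend also satisfies both endpoints, it just feels uneven across large zoom ranges):

        def zoom_view(sw, sh, iw, ih, px, py, zf):
            """Return the image scale, the screen offset of the image's
            top-left corner, and the on-screen rect of the target pixel.
            Assumes square pixels and zf in [0.0, 1.0]."""
            fit_scale = min(sw / iw, sh / ih)   # zf = 0: whole image fits the screen
            pixel_scale = min(sw, sh)           # zf = 1: one image pixel spans the screen
            # Geometric interpolation keeps the zoom feeling even across
            # several orders of magnitude.
            scale = fit_scale * (pixel_scale / fit_scale) ** zf

            # Blend the point we centre on: image centre at zf = 0,
            # the middle of the target pixel at zf = 1.
            cx = (1 - zf) * (iw / 2) + zf * (px + 0.5)
            cy = (1 - zf) * (ih / 2) + zf * (py + 0.5)

            # Screen offset of the image's top-left corner so that (cx, cy)
            # lands in the middle of the screen.
            ox = sw / 2 - cx * scale
            oy = sh / 2 - cy * scale

            # Screen rectangle of the chosen pixel: (x, y, width, height).
            pixel_rect = (ox + px * scale, oy + py * scale, scale, scale)
            return scale, (ox, oy), pixel_rect

    Rendering then reduces to blitting the image scaled by scale at (ox, oy) with nearest-neighbour sampling (pygame.transform.scale does exactly that) and filling the remaining screen area with bc.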

    Read the article

  • Large Y-axis tickInterval in Highcharts does not work

    - by ckovacs
    I have a chart at this JSFiddle to demonstrate a problem where our charts are not respecting the y-axis tick interval for large values: http://jsfiddle.net/z2cDu/1/ var plots = {"usBytePlots":[[1362009600000,143663192997],[1362096000000,110184848742],[1362182400000,97694974247],[1362268800000,90764690805],[1362355200000,112436517747],[1362441600000,113563368701],[1362528000000,139579327454],[1362614400000,118406594506],[1362700800000,125366899935],[1362787200000,134189435596],[1362873600000,132873135854],[1362960000000,121002328604],[1363046400000,123138222001],[1363132800000,115667785553],[1363219200000,103746172138],[1363305600000,108602633473],[1363392000000,89133998142],[1363478400000,92170701458],[1363564800000,86696922873],[1363651200000,80980159054],[1363737600000,97604615694],[1363824000000,108011666339],[1363910400000,124419138381],[1363996800000,121704988344],[1364083200000,124337959109],[1364169600000,137495512348],[1364256000000,136017103319],[1364342400000,60867510427]],"dsBytePlots":[[1362009600000,1734982247336],[1362096000000,1471928923201],[1362182400000,1453869593201],[1362268800000,1411787942581],[1362355200000,1460252447519],[1362441600000,1595590020177],[1362528000000,1658007074783],[1362614400000,1411941908699],[1362700800000,1447659369450],[1362787200000,1643008799861],[1362873600000,1792357973023],[1362960000000,1575173242169],[1363046400000,1565139003978],[1363132800000,1549211975554],[1363219200000,1438411448469],[1363305600000,1380445413578],[1363392000000,1298319283929],[1363478400000,1194578344720],[1363564800000,1211409679299],[1363651200000,1142416351471],[1363737600000,1223822672626],[1363824000000,1267692136487],[1363910400000,1384335759541],[1363996800000,1577205919828],[1364083200000,1675715948928],[1364169600000,1517593781592],[1364256000000,1562183018457],[1364342400000,681007264598]],"aggregatedTotalBytes":43476367948896,"aggregatedUsBytes":3150320403841,"aggregatedDsBytes":40326047545055,"maxTotalBytes":328186292129,"maxTotalBitsPerSecond":30387619.641574074} ; $('#container').highcharts({ yAxis: { tickInterval: 53687091200 // 500 gigabytes. Maximum y-axis value is approx 1.8TB }, series : [ { color: 'rgba(80, 180, 77, 0.7)', type: 'areaspline', name : 'Downstream', data : plots.dsBytePlots, total: plots.aggregatedDsBytes }, { color: 'rgba(33, 143, 197, 0.7)', type: 'areaspline', name : 'Upstream', data : plots.usBytePlots, total: plots.aggregatedUsBytes }] }); In this example we are charting bandwidth utilization in bytes. The chart has a maximum value of about 1.8TB. We set the y-axis tick interval to exactly 500GB but the rendered y-axis ticks don't make any sense for the given interval.

    Read the article

  • Pros and Cons on where to place business logic: app level or DB

    - by Juri
    Hi, I keep encountering discussions about where to place the business logic: inside a business layer in the application code, or down in the DB in the form of stored procedures. Personally I'd tend to the first approach, but I'd like to hear some opinions from your part first, without influencing you with my personal views. I know there is no one-size-fits-all solution and it often depends on many factors, but we can discuss that. By the way, we are in the context of web applications, and our current approach is to have:

        a UI layer which accepts UI input and does a first, client-side validation
        a business layer with a number of service classes which contain the business logic, including validation of user input (server-side)
        a data access layer which calls stored procedures on the DB for persistence/read operations

    Many people, however, tend to move the business-layer stuff (especially the validation) down to the DB in the form of stored procedures. What do you think about it? I'd like to discuss.

    Read the article

  • memory usage in C# (.NET) app is very high, until I call System.GC.Collect()

    - by Chris Gray
    I've written an app that spins up a few threads, each of which reads several MB of memory. Each thread then connects to the Internet and uploads the data. This occurs thousands of times, and each upload takes some time. I'm seeing a situation (verified with windbg/sos and !dumpheap) where the Byte[] arrays are not getting collected automatically, causing 100-150 MB of memory to be reported in Task Manager. If I call System.GC.Collect() I see a huge drop in memory, a drop of over 100 MB. I don't like calling System.GC.Collect(), and my PC has tons of free memory, but if anyone looks at Task Manager they're going to be concerned, thinking my app is leaking horribly. Tips?

    Read the article
