Search Results

Search found 2208 results on 89 pages for 'boost signals'.

Page 82/89 | < Previous Page | 78 79 80 81 82 83 84 85 86 87 88 89  | Next Page >

  • Build OpenGL model in parallel?

    - by Brendan Long
    I have a program which draws some terrain and simulates water flowing over it (in a cheap and easy way). Updating the water was easy to parallelize using OpenMP, so I can do ~50 updates per second. The problem is that even with a small amount of water, my draws per second are very very low (starts at 5 and drops to around 2 once there's a significant amount of water). It's not a problem with the video card because the terrain is more complicated and gets drawn so quickly that boost::timer tells me that I get infinity draws per second if I turn the water off. It may be related to memory bandwidth though (since I assume the model stays on the card and doesn't have to be transferred every time). What I'm concerned about is that on every draw, I'm calling glVertex3f() about a million times (max size is 450*600, 4 vertices each), and it's done entirely sequentially because Glut won't let me call anything in parallel. So: is there some way of building the list in parallel and then passing it to OpenGL all at once? Or some other way of making it draw faster? Am I using the wrong method (besides the obvious "use fewer vertices")?
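
    One approach often suggested for this (a sketch, not taken from the post) is to fill a plain vertex array in parallel with OpenMP and hand the whole thing to OpenGL in a single glDrawArrays() call, replacing the million glVertex3f() calls; the grid dimensions and the height lookup below are placeholders standing in for the question's data, and the GL calls themselves must still happen on the context's thread:

        // Build the water mesh into a flat array in parallel, then submit it once.
        // Compile with OpenMP enabled (e.g. -fopenmp).
        #include <GL/gl.h>
        #include <cstddef>
        #include <vector>

        void drawWater(int w, int h, const std::vector<float>& height)
        {
            std::vector<float> verts(static_cast<std::size_t>(w) * h * 4 * 3); // 4 corners per cell, xyz each

            #pragma omp parallel for
            for (int y = 0; y < h; ++y) {
                for (int x = 0; x < w; ++x) {
                    float* v = &verts[(static_cast<std::size_t>(y) * w + x) * 12];
                    const float z = height[y * w + x]; // one quad per cell, kept flat for brevity
                    const float quad[12] = {
                        float(x),     float(y),     z,
                        float(x + 1), float(y),     z,
                        float(x + 1), float(y + 1), z,
                        float(x),     float(y + 1), z
                    };
                    for (int i = 0; i < 12; ++i)
                        v[i] = quad[i];
                }
            }

            // Submitting the data is still sequential, but it is one call instead of ~1M.
            glEnableClientState(GL_VERTEX_ARRAY);
            glVertexPointer(3, GL_FLOAT, 0, &verts[0]);
            glDrawArrays(GL_QUADS, 0, static_cast<GLsizei>(verts.size() / 3));
            glDisableClientState(GL_VERTEX_ARRAY);
        }

    A vertex buffer object would additionally avoid re-uploading the array every frame, but even the client-side array above removes the per-vertex call overhead the question describes.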

    Read the article

  • How can I write a function template for all types with a particular type trait?

    - by TC
    Consider the following example: struct Scanner { template <typename T> T get(); }; template <> string Scanner::get() { return string("string"); } template <> int Scanner::get() { return 10; } int main() { Scanner scanner; string s = scanner.get<string>(); int i = scanner.get<int>(); } The Scanner class is used to extract tokens from some source. The above code works fine, but fails when I try to get other integral types like a char or an unsigned int. The code to read these types is exactly the same as the code to read an int. I could just duplicate the code for all other integral types I'd like to read, but I'd rather define one function template for all integral types. I've tried the following: struct Scanner { template <typename T> typename enable_if<boost::is_integral<T>, T>::type get(); }; Which works like a charm, but I am unsure how to get Scanner::get<string>() to function again. So, how can I write code so that I can do scanner.get<string>() and scanner.get<any integral type>() and have a single definition to read all integral types? Update: bonus question: What if I want to accept more than one range of classes based on some traits? For example: how should I approach this problem if I want to have three get functions that accept (i) integral types (ii) floating point types (iii) strings, respectively.
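
    One way this is commonly resolved (a sketch, not from the post) is to keep the SFINAE-restricted template for integral types and add a second overload that is only enabled for std::string; the two templates differ only in their enable_if return types, so whichever one does not match is simply removed from overload resolution:

        // Sketch using Boost's enable_if; std::enable_if works the same way in C++11.
        #include <boost/type_traits/is_integral.hpp>
        #include <boost/type_traits/is_same.hpp>
        #include <boost/utility/enable_if.hpp>
        #include <iostream>
        #include <string>

        struct Scanner {
            // one definition shared by every integral type (int, char, unsigned int, ...)
            template <typename T>
            typename boost::enable_if<boost::is_integral<T>, T>::type get()
            {
                return 10; // real code would read and convert a token here
            }

            // a separate overload enabled only for std::string
            template <typename T>
            typename boost::enable_if<boost::is_same<T, std::string>, T>::type get()
            {
                return std::string("string");
            }
        };

        int main()
        {
            Scanner scanner;
            std::string s = scanner.get<std::string>();
            unsigned int u = scanner.get<unsigned int>(); // now uses the same definition as int
            std::cout << s << ' ' << u << '\n';
        }

    The bonus question follows the same pattern: a third overload guarded by boost::is_floating_point<T> would cover the floating-point types.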

    Read the article

  • How string accepting interface should look like?

    - by ybungalobill
    Hello, this is a follow-up to this question. Suppose I write a C++ interface that accepts or returns a const string. I can use a const char* zero-terminated string: void f(const char* str); // (1) The other way would be to use an std::string: void f(const string& str); // (2) It's also possible to write an overload and accept both: void f(const char* str); // (3) void f(const string& str); Or even a template in conjunction with boost string algorithms: template<class Range> void f(const Range& str); // (4) My thoughts are: (1) is not C++ish and may be less efficient when subsequent operations need to know the string length. (2) is bad because now f("long very long C string"); invokes a construction of std::string which involves a heap allocation. If f uses that string just to pass it to some low-level interface that expects a C string (like fopen) then it is just a waste of resources. (3) causes code duplication, although one f can call the other depending on what is the most efficient implementation. However, we can't overload based on return type, as in the case of std::exception::what(), which returns a const char*. (4) doesn't work with separate compilation and may cause even larger code bloat. Choosing between (1) and (2) based on what's needed by the implementation is, well, leaking an implementation detail into the interface. The question is: what is the preferred way? Is there any single guideline I can follow? What's your experience?
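
    As an illustration only (not from the post), option (3) is usually written with one overload forwarding to the other, so the "duplication" is a one-line wrapper; which direction to forward depends on what the implementation ultimately needs:

        #include <cstdio>
        #include <string>

        // The overload that does the real work; here the implementation feeds a C API,
        // so the C-string version is the primary one.
        void f(const char* str)
        {
            std::printf("opening %s\n", str); // stand-in for e.g. fopen(str, "r")
        }

        // Thin convenience overload for std::string callers.
        void f(const std::string& str)
        {
            f(str.c_str()); // no copy, no heap allocation
        }

        int main()
        {
            f("long very long C string");       // no std::string is constructed
            f(std::string("already a string")); // works too
        }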

    Read the article

  • Is it possible to cache all the data in a SQL Server CE database using LinqToSql?

    - by DanM
    I'm using LinqToSql to query a small, simple SQL Server CE database. I've noticed that any operations involving sub-properties are disappointingly slow. For example, if I have a Customer table that is referenced by an Order table, LinqToSql will automatically create an EntitySet<Order> property. This is a nice convenience, allowing me to do things like Customer.Order.Where(o => o.ProductName == "Stopwatch"), but for some reason, SQL Server CE hangs up pretty badly when I try to do stuff like this. One of my queries, which isn't really that complicated, takes 3-4 seconds to complete. I can get the speed up to acceptable, even fast, if I just grab the two tables individually and convert them to List<Customer> and List<Order>, then join them manually with my own query, but this is throwing out a lot of what makes LinqToSql so appealing. So, I'm wondering if I can somehow get the whole database into RAM and just query that way, then occasionally save it. Is this possible? How? If not, is there anything else I can do to boost the performance besides resorting to doing all the joins manually? Note: My database in its initial state is about 250K and I don't expect it to grow to more than 1-2Mb. So, loading the data into RAM certainly wouldn't be a problem from a memory point of view. Update: Here are the table definitions for the example I used in my question: create table Order ( Id int identity(1, 1) primary key, ProductName ntext null ) create table Customer ( Id int identity(1, 1) primary key, OrderId int null references Order (Id) )

    Read the article

  • Portable way of counting milliseconds in C++ ?

    - by ereOn
    Hi, Is there any portable (Windows & Linux) way of counting how many milliseconds elapsed between two calls? Basically, I want to achieve the same functionality as the StopWatch class of .NET (for those who have already used it). In a perfect world, I would have used boost::date_time, but that's not an option here due to some silly rules I'm forced to respect. For those who prefer reading code, this is what I'd like to achieve: Timer timer; timer.start(); // Some instructions here timer.stop(); // Print out the elapsed time std::cout << "Elapsed time: " << timer.milliseconds() << "ms" << std::endl; So, if there is a portable (set of) function(s) that can help me implement the Timer class, what is it? If there is no such function, which Windows & Linux APIs should I use to achieve this functionality (using #ifdef WINDOWS-like macros)? Thanks!
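
    A minimal Timer sketch along the lines the question describes, not a definitive implementation: it assumes GetTickCount() on Windows and clock_gettime(CLOCK_MONOTONIC) on Linux (which may need -lrt on older glibc). GetTickCount has roughly 10-16 ms resolution and wraps after ~49 days, and std::chrono::steady_clock replaces all of this where C++11 is available.

        #ifdef _WIN32
        #  include <windows.h>
        #else
        #  include <time.h>
        #endif

        class Timer {
        public:
            void start() { m_start = now(); }
            void stop()  { m_stop = now(); }
            unsigned long long milliseconds() const { return m_stop - m_start; }

        private:
            static unsigned long long now()
            {
        #ifdef _WIN32
                return GetTickCount();                  // milliseconds since boot
        #else
                timespec ts;
                clock_gettime(CLOCK_MONOTONIC, &ts);    // monotonic, unaffected by clock changes
                return ts.tv_sec * 1000ULL + ts.tv_nsec / 1000000ULL;
        #endif
            }

            unsigned long long m_start, m_stop;
        };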

    Read the article

  • What to throw in a C++ class wrapping a C library ?

    - by ereOn
    I have to create a set of wrapping C++ classes around an existing C library. For many objects of the C library, the construction is done by calling something like britney_spears* create_britney_spears() and the opposite function void free_britney_spears(britney_spears* brit). If the allocation of a britney_spears fails, create_britney_spears() returns NULL. This is, as far as I know, a very common pattern. Now I want to wrap this inside a C++ class. //britney_spears.hpp class BritneySpears { public: BritneySpears(); private: boost::shared_ptr<britney_spears> m_britney_spears; }; And here is the implementation: // britney_spears.cpp BritneySpears::BritneySpears() : m_britney_spears(create_britney_spears(), free_britney_spears) { if (!m_britney_spears) { // Here I should throw something to abort the construction, but what ??! } } So the question is in the code sample: What should I throw to abort the constructor ? I know I can throw almost anything, but I want to know what is usually done. I have no other information about why the allocation failed. Should I create my own exception class ? Is there a std exception for such cases ? Many thanks.
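
    For what it's worth, the two answers usually given here are std::bad_alloc (when the failure really is an out-of-memory condition) and a small class derived from std::runtime_error; a sketch of the latter, reusing the declarations from the question:

        #include <stdexcept>
        #include <string>

        // A dedicated exception type; deriving from std::runtime_error keeps it
        // catchable as std::exception for callers that don't care about the details.
        class BritneySpearsError : public std::runtime_error {
        public:
            explicit BritneySpearsError(const std::string& what_arg)
                : std::runtime_error(what_arg) {}
        };

        BritneySpears::BritneySpears()
            : m_britney_spears(create_britney_spears(), free_britney_spears)
        {
            if (!m_britney_spears)
                throw BritneySpearsError("create_britney_spears() returned NULL");
        }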

    Read the article

  • C Population Count of unsigned 64-bit integer with a maximum value of 15

    - by BitTwiddler1011
    I use a population count (Hamming weight) function intensively in a Windows C application and have to optimize it as much as possible in order to boost performance. In more than half the cases where I use the function, I only need to know the value to a maximum of 15. The software will run on a wide range of processors, both old and new. I already make use of the POPCNT instruction when Intel's SSE4.2 or AMD's SSE4a is present, but would like to optimize the software implementation (used as a fallback if no SSE4 is present) as much as possible. Currently I have the following software implementation of the function: inline int population_count64(unsigned __int64 w) { w -= (w >> 1) & 0x5555555555555555ULL; w = (w & 0x3333333333333333ULL) + ((w >> 2) & 0x3333333333333333ULL); w = (w + (w >> 4)) & 0x0f0f0f0f0f0f0f0fULL; return int((w * 0x0101010101010101ULL) >> 56); } So to summarize: (1) I would like to know if it is possible to optimize this for the case when I only want to know the value to a maximum of 15. (2) Is there a faster software implementation (for both Intel and AMD CPUs) than the function above?
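
    On point (1), one hedged idea (not from the post): when only values up to 15 matter, Kernighan's bit-clearing loop can stop early, because it does one iteration per set bit. Whether it actually beats the branch-free version above depends on the typical bit density of the inputs, so it would need benchmarking on the target CPUs:

        // Returns the exact population count if it is 15 or less, and 16 to mean
        // "more than 15". Each iteration clears the lowest set bit, so sparse
        // words finish after only a few iterations.
        inline int population_count64_max15(unsigned __int64 w)
        {
            int count = 0;
            while (w != 0 && count < 16) {
                w &= w - 1; // clear the lowest set bit (Kernighan's trick)
                ++count;
            }
            return count;
        }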

    Read the article

  • php / phpDoc - @return instance of $this class ?

    - by searbe
    How do I mark a method as "returns an instance of the current class" in my phpDoc? In the following example my IDE (Netbeans) will see that setSomething always returns a foo object. But that's not true if I extend the object - it'll return $this, which in the second example is a bar object, not a foo object. class foo { protected $_value = null; /** * Set something * * @param string $value the value * @return foo */ public function setSomething($value) { $this->_value = $value; return $this; } } $foo = new foo(); $out = $foo->setSomething(); So fine - setSomething returns a foo - but in the following example, it returns a bar: class bar extends foo { public function someOtherMethod(){} } $bar = new bar(); $out = $bar->setSomething(); $out->someOtherMethod(); // <-- Here, Netbeans will think $out // is a foo, so doesn't see this other // method in $out's code-completion ... it'd be great to solve this as, for me, code completion is a massive speed-boost. Anyone got a clever trick, or even better, a proper way to document this with phpDoc?

    Read the article

  • how to pass vector of string to foo(char const *const *const)?

    - by user347208
    Hi, this is my first post so please be nice. I searched in this forum and googled but I still cannot find the answer. This problem has bothered me for more than a day, so please give me some help. Thank you. I need to pass a vector of strings to a library function foo(char const *const *const). I cannot pass &Vec[0] since it's a pointer to a string. Therefore, I create an array and copy each string's c_str() into it. The following is my code (aNames is the vector of strings): const char* aR[aNames.size()]; std::transform(aNames.begin(), aNames.end(), aR, boost::bind(&std::string::c_str, _1)); foo(aR); However, it seems to cause some undefined behavior: If I run the above code, then the function foo throws some warnings about illegal characters ('èI' blablabla) in aR. If I print aR before calling foo like this: std::copy(aR, aR+rowNames.size(), std::ostream_iterator<const char*>(std::cout, "\n")); foo(aR); then everything is fine. My questions are: Does the conversion cause undefined behavior? If so, why? What is the correct way to pass a vector of strings to foo(char const *const *const)? Thank you very much for your help!
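
    For reference, the approach usually suggested (a sketch, not the poster's code) is to build a std::vector<const char*> whose elements point at the strings' internal buffers and pass its data pointer; the trailing NULL is an assumption about foo()'s contract and should be checked against the library's documentation. The string vector must outlive the call and must not be modified while the pointers are in use.

        #include <cstddef>
        #include <string>
        #include <vector>

        void foo(char const* const* const argv); // the library function from the question

        void callFoo(const std::vector<std::string>& aNames)
        {
            std::vector<const char*> ptrs;
            ptrs.reserve(aNames.size() + 1);
            for (std::vector<std::string>::const_iterator it = aNames.begin();
                 it != aNames.end(); ++it)
                ptrs.push_back(it->c_str());
            ptrs.push_back(NULL); // assumption: foo() stops at a NULL entry

            foo(&ptrs[0]);
        }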

    Read the article

  • C++0x Smart Pointer Comparisons: Inconsistent, what's the rationale?

    - by GManNickG
    In C++0x (n3126), smart pointers can be compared, both relationally and for equality. However, the way this is done seems inconsistent to me. For example, shared_ptr defines operator< to be equivalent to: template <typename T, typename U> bool operator<(const shared_ptr<T>& a, const shared_ptr<U>& b) { return std::less<void*>()(a.get(), b.get()); } Using std::less provides total ordering with respect to pointer values, unlike a vanilla relational pointer comparison, which is unspecified. However, unique_ptr defines the same operator as: template <typename T1, typename D1, typename T2, typename D2> bool operator<(const unique_ptr<T1, D1>& a, const unique_ptr<T2, D2>& b) { return a.get() < b.get(); } It also defines the other relational operators in a similar fashion. Why the change in method and "completeness"? That is, why does shared_ptr use std::less while unique_ptr uses the built-in operator<? And why doesn't shared_ptr also provide the other relational operators, like unique_ptr? I can understand the rationale behind either choice: with respect to method: it represents a pointer, so just use the built-in pointer operators, versus it needs to be usable within an associative container, so provide total ordering (like a vanilla pointer would get with the default std::less predicate template argument); with respect to completeness: it represents a pointer, so provide all the same comparisons as a pointer, versus it is a class type and only needs to be less-than comparable to be used in an associative container, so only provide that requirement. But I don't see why the choice changes depending on the smart pointer type. What am I missing? Bonus/related: std::shared_ptr seems to have followed from boost::shared_ptr, and the latter omits the other relational operators "by design" (and so std::shared_ptr does too). Why is this?

    Read the article

  • [c++/STL] Selective iterator

    - by rubenvb
    FYI: no boost, yes it has this, I want to reinvent the wheel ;) Is there some form of a selective iterator (possible) in C++? What I want is to separate strings like this: some:word{or other to a form like this: some : word { or other I can do that with two loops and find_first_of(":") and ("{") but this seems (very) inefficient to me. I thought that maybe there would be a way to create/define/write an iterator that would iterate over all these values with for_each. I fear this will have me writing a full-fledged custom way-too-complex iterator class for a std::string. So I thought maybe this would do: std::vector<size_t> list; size_t index = mystring.find(":"); while( index != std::string::npos ) { list.push_back(index); index = mystring.find(":", list.back() + 1); } std::for_each(list.begin(), list.end(), addSpaces(mystring)); This looks messy to me, and I'm quite sure a more elegant way of doing this exists. But I can't think of it. Anyone have a bright idea? Thanks PS: I did not test the code posted, just a quick write-up of what I would try
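
    For comparison, a single-pass alternative (a sketch, not the poster's code) simply walks the string once and pads any delimiter it encounters, which avoids both the repeated find() loops and a custom iterator:

        #include <iostream>
        #include <string>

        // Copy the input, surrounding every character found in `delims` with spaces.
        std::string padDelimiters(const std::string& in, const std::string& delims)
        {
            std::string out;
            out.reserve(in.size() * 2);
            for (std::string::size_type i = 0; i < in.size(); ++i) {
                if (delims.find(in[i]) != std::string::npos) {
                    out += ' ';
                    out += in[i];
                    out += ' ';
                } else {
                    out += in[i];
                }
            }
            return out;
        }

        int main()
        {
            std::cout << padDelimiters("some:word{or other", ":{") << '\n';
            // prints: some : word { or other
        }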

    Read the article

  • Can someone explain RAID-0 in plain English?

    - by Edward Tanguay
    I've heard about and read about RAID throughout the years and understand it theoretically as a way to help e.g. server PCs reduce the chance of data loss, but now I am buying a new PC which I want to be as fast as possible and have learned that having two drives can considerably increase the perceived performance of your machine. In the question Recommendations for hard drive performance boost, the author says he is going to RAID-0 two 7200 RPM drives together. What does this mean in practical terms for me with Windows 7 installed, e.g. can I buy two drives, go into the device manager and "raid-0 them together"? I am not a network administrator or a hardware guy, I'm just a developer who is going to have a computer store build me a super fast machine next week. I can read the wikipedia page on RAID but it is just way too many trees and not enough forest to help me build a faster PC: RAID-0: "Striped set without parity" or "Striping". Provides improved performance and additional storage but no redundancy or fault tolerance. Because there is no redundancy, this level is not actually a Redundant Array of Inexpensive Disks, i.e. not true RAID. However, because of the similarities to RAID (especially the need for a controller to distribute data across multiple disks), simple strip sets are normally referred to as RAID 0. Any disk failure destroys the array, which has greater consequences with more disks in the array (at a minimum, catastrophic data loss is twice as severe compared to single drives without RAID). A single disk failure destroys the entire array because when data is written to a RAID 0 drive, the data is broken into fragments. The number of fragments is dictated by the number of disks in the array. The fragments are written to their respective disks simultaneously on the same sector. This allows smaller sections of the entire chunk of data to be read off the drive in parallel, increasing bandwidth. RAID 0 does not implement error checking so any error is unrecoverable. More disks in the array means higher bandwidth, but greater risk of data loss. So in plain English, how can "RAID-0" help me build a faster Windows-7 PC that I am going to order next week?

    Read the article

  • Sysadmin 101: How can I figure out why my server crashes and monitor performance?

    - by bflora
    I have a Drupal-powered site that seems to have never-ending performance problems. It was butt-slow about 5 months ago. I brought in some guys who installed nginx for anonymous visitors, ajaxified a few queries so they wouldn't fire during page load, and helped me find a few bottlenecks in the code. For about a month, the site was significantly faster, though not "fast" by any stretch of the word. Meanwhile, I'm now shelling out $400/month to Slicehost to host a site that gets fewer than 5,000 uniques a day. Yes, you read that right. Go Drupal. Recently the site started crashing again and is slow again. I can't afford to hire people to come in, study my code from top to bottom, and make changes that may or may not help anymore. And I can't afford to throw more hardware at the problem. So I need to figure out what the problem is myself. Questions: When apache crashes, is it possible to find out what caused it to crash? There has to be a way, right? If so, how can I do this? Is there software I can use that will tell me which process caused my server to die? (e.g. "Apache crashed because someone visited page X." or "Apache crashed because you were importing too many RSS items from feed X.") There's got to be a way to learn this, right? What's a good, noob-friendly way to monitor my current apache performance? My developer friends tell me to "just use Top, dude," but Top shows me a bunch of numbers without any context. I have no clue what qualifies as a bad number or a good number in Top, or which processes are relevant and which aren't. Are there any noob-friendly server monitoring tools out there? Ideally, I could have a page that would give me a color-coded indicator about how apache is performing and then show me a list of processes or pages that are sucking right now. This way, I could know when performance is bad and then what's causing it to be so bad. Why does PHP memory matter? My PHP process apparently has a 30MB memory footprint. Will it run faster if I bring that number down? Thanks for any advice. I spent a year or so trying to boost my advertising income so I could hire a contractor to solve my performance woes. I didn't want to have to learn all this sysadmin voodoo. I'm now resigned to the fact that I might not have a choice.

    Read the article

  • How can I verify that my SSD is performing as it should?

    - by Jon Skeet
    EDIT: Okay, so I've no idea what caused the change, but after trying loads of different things to work out what was wrong, I've rerun the WEI (about the 4th time in total) and the score has jumped to a far more respectable 7.3. I'm going to leave well alone now :) I've got a brand new 256GB SSD (Crucial CT256M225) which should have stellar performance. However, on my (also brand new) Dell Studio 1557 with Windows 7 Professional 64 bit, it's only giving a performance index of 5.9. I realise the performance index should be taken with a bit of a pinch of salt, but I wonder whether something's wrong. Given this paragraph from this MSDN article on Windows 7, I'd expect to see a high 6.X or possible a 7.X figure: In Windows 7, there are new random read, random write and flush assessments. Better SSDs can score above 6.5 all the way to 7.9. To be included in that range, an SSD has to have outstanding random read rates and be resilient to flush and random write workloads. In the Beta timeframe of Windows 7, there was a capping of scores at 1.9, 2.9 or the like if a disk (SSD or HDD) didn’t perform adequately when confronted with our random write and flush assessments. Feedback on this was pretty consistent, with most feeling the level of capping to be excessive. As a result, we now simply restrict SSDs with performance issues from joining the newly added 6.0+ and 7.0+ ranges. SSDs that are not solid performers across all assessments effectively get scored in a manner similar to what they would have been in Windows Vista, gaining no Win7 boost for great random read performance. How can I diagnose any performance issues with either the disk or how Windows 7 is handling it? Are there any particularly good tools you'd recommend? One note of curiosity: I couldn't install the firmware update (to 1916) until I changed my BIOS handling of the drive to ATA mode; after installing the firmware I tried to boot the Windows installation DVD - but that only worked after turning it back to AHCI mode (which I've left it in). Installing Windows 7 took longer than I expected - it sat at the "Windows is loading files" prompt for a very long time. Likewise it was on "Expanding files (0%)" for a long time. Since installation it's been fine though - but I don't know whether it's really providing quite as beefy performance as it should. EDIT: My netbook with the 64GB equivalent drive has a performance index of 6.6...

    Read the article

  • Crashes and freezes after fixing "BOOTMGR is missing" error

    - by Greg-J
    I came back from a 3-day weekend to a computer that was off. I leave my PC on 24/7, so this was odd. Turn it on to get the dreaded "BOOTMGR is missing" screen. Two attempts at Windows Recovery and it booted into Windows fine. After an hour or so, I get a frozen Chrome and my start bar disappears. Ctrl+Alt+Del brings up an error box telling me that Ctrl+Alt+Del failed to work properly. Clicking on any open application triggers an error (I can't recall the error now, but it essentially just said that the application couldn't be found running or something along those lines). I restart, and again, the same thing happens after a while of use. I turn it on, install the 47 updates I have or so, and then restart it. After a while of use (under an hour), it just freezes completely. My thoughts are: SSDs, RAM or PS. My system specs below: (RAID0) 2 x Crucial M4 CT128M4SSD2 2.5" 128GB SATA III MLC Internal Solid State Drive (SSD) CORSAIR Vengeance 16GB (4 x 4GB) 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800) Desktop Memory Model CML16GX3M4A1600C9 CORSAIR HX Series HX750 750W ATX12V 2.3 / EPS12V 2.91 SLI Ready CrossFire Ready 80 PLUS GOLD Certified Modular Active 1 x ASUS Maximus IV Gene-Z/GEN3 LGA 1155 Intel Z68 HDMI SATA 6Gb/s USB 3.0 Micro ATX Intel Motherboard 1 x Hitachi GST Deskstar 7K1000.C 0F10383 1TB 7200 RPM SATA 3.0Gb/s 3.5" Internal Hard Drive -Bare Drive 1 x Intel Core i7-2600K Sandy Bridge 3.4GHz (3.8GHz Turbo Boost) LGA 1155 95W Quad-Core Desktop Processor Intel HD Graphics 1 x SAPPHIRE 21197-00-40G Radeon HD 7970 3GB 384-bit GDDR5 PCI Express 3.0 x16 HDCP Ready CrossFireX Support Video Card 1 x Noctua NH-D14 120mm & 140mm SSO CPU Cooler This is all crammed in a pretty small case (NZXT Vulcan) and has been running perfectly problem-free since January. The only thing out of the ordinary is that there is a fan in the case that is now making noise whereas the case has previously been completely silent. I have no reason to believe this is anything more then correlation, but felt it is worth mentioning. I believe it MAY be the SSDs simply because of the BOOTMGR error, but not sure how to test that theory. My belief that it may be the RAM is simply from experience with frozen machines. I haven't had the time to memtest it, but will. The PS being the culprit is something I've picked up by reading similar threads on various forums, and it seems plausible. I am unsure how to test this though. ANY insight whatsover would be greatly appreciated!

    Read the article

  • Qt vs .NET - plz no n00bs who don't know wtf they're talking about [closed]

    - by Pirate for Profit
    Man in all these Qt vs. .NET discussions 90% of these people don't know WTF they're talking about. Trying to get a real comparison chart going before we embark on a major fucking project. And yes I'm drunk, and yes I use cocaine. Event Handling In Qt the event handling system you just emit signals when something cool happens and then catch them in slots, for instance emit valueChanged(int percent, bool something); and void MyCatcherObj::valueChanged(int p, bool ok){} blocking them and disconnecting them when needed, doing it across threads... once you get the hang of it, it just seems a lot more natural and intuitive than the way the .NET event handling is set up (you know, object sender, CustomEventArgs e). And I'm not just talking about syntax, because in the end the .NET delegate crap is the bomb. I'm also talking about in more than just reflection (because, yes, .NET obviously has much stronger reflection capabilities). I'm talking about in the way the system feels to a human being. Qt wins hands down i m o. Basically, the footprints make more sense and you can visualize the project easier without the clunky event handling system. I wish I could explain it better. The only thing is, I do love some of the ease of C# compared to C++ and .NET's assembly architecture. That is a big bonus for modular projects, which are a PITA to do in C++. Database Ease of Doing Crap Also what about datasets and database manipulations. I think .net wins here but I'm not sure. Threading/Concurrency How do you guys think of the threading? In .NET, all I've ever done is make like a list of master worker threads with locks. I like QConcurrentFramework, you don't worry about locks or anything, and with the ease of the signal slot system across threads it's nice to get notified about the progress of things. Memory Usage Also what do you think of the overall memory usage comparison. Is the .NET garbage collector pretty on the ball and quick compared to the instantaneous nature of native memory management? Or does it just let programs leak up a storm and lag the computer then clean it up when it's about to really lag? However, I am a n00b who doesn't know what I'm talking about, please school me on the subject.

    Read the article

  • Django FileField not saving to upload_to location

    - by Erik
    I have an Attachment model that has a FileField in a Django 1.4.1 app. This FileField has a callable upload_to parameter which, per the Django docs should be called when the form (and therefore the model) is saved. When I run FormTest below, the upload_to callable is never called and the file therefore does not appear in the location provided by the upload_to method. What am I doing wrong? Notice that in the passing tests in ModelTest (also below), the upload_to method works as expected. Test: from core.forms.attachments import AttachmentForm from django.test import TestCase import unittest from django.core.files.uploadedfile import SimpleUploadedFile from django.core.files.storage import default_storage def suite(): return unittest.TestSuite( [ unittest.TestLoader().loadTestsFromTestCase(FormTest), ] ) class FormTest(TestCase): def test_form_1(self): filename = 'filename' f = file(filename) data = {'name':'name',} file_data = {'attachment_file':SimpleUploadedFile(f.name,f.read()),} form = AttachmentForm(data=data,files=file_data) self.assertTrue(form.is_valid()) attachment = form.save() root_directory = 'attachments' upload_location = root_directory + '/' + attachment.directory + '/' + filename self.assertTrue(attachment.attachment_file) # Fails self.assertTrue(default_storage.exists(upload_location)) # Fails Attachment Model: from django.db import models from parent_mixins import Parent_Mixin import uuid from django.db.models.signals import pre_delete,pre_save from dirtyfields import DirtyFieldsMixin def upload_to(instance,filename): return 'attachments/' + instance.directory + '/' + filename def uuid_directory_name(): return uuid.uuid4().hex class Attachment(DirtyFieldsMixin,Parent_Mixin,models.Model): attachment_file = models.FileField(blank=True,null=True,upload_to=upload_to) directory = models.CharField(blank=False,default=uuid_directory_name,null=False,max_length=32) name = models.CharField(blank=False,default=None,null=False,max_length=128) class Meta: app_label = 'core' def __str__(self): return unicode(self).encode('utf-8') def __unicode__(self): return unicode(self.name) @models.permalink def get_absolute_url(self): return('core_attachments_update',(),{'pk': self.pk}) # def save(self,*args,**kwargs): # super(Attachment,self).save(*args,**kwargs) def pre_delete_callback(sender, instance, *args, **kwargs): if not isinstance(instance, Attachment): return if not instance.attachment_file: return instance.attachment_file.delete(save=False) def pre_save_callback(sender, instance, *args, **kwargs): if not isinstance(instance, Attachment): return if not instance.attachment_file: return if instance.is_dirty(): dirty_fields = instance.get_dirty_fields() if 'attachment_file' in dirty_fields: old_attachment_file = dirty_fields['attachment_file'] old_attachment_file.delete() pre_delete.connect(pre_delete_callback) pre_save.connect(pre_save_callback) Attachment Form: from ..models.attachments import Attachment from crispy_forms.helper import FormHelper from crispy_forms.layout import Div,Layout,HTML,Field,Fieldset,Button,ButtonHolder,Submit from django import forms class AttachmentFormHelper(FormHelper): form_tag=False layout = Layout( Div( Div( Field('name',css_class='span4'), Field('attachment_file',css_class='span4'), css_class='span4', ), css_class='row', ), ) class AttachmentForm(forms.ModelForm): helper = AttachmentFormHelper() class Meta: fields=('attachment_file','name') model = Attachment class AttachmentInlineFormHelper(FormHelper): form_tag=False form_style='inline' layout = 
Layout( Div( Div( Field('name',css_class='span4'), Field('attachment_file',css_class='span4'), Field('DELETE',css_class='span4'), css_class='span4', ), css_class='row', ), ) class AttachmentInlineForm(forms.ModelForm): helper = AttachmentInlineFormHelper() class Meta: fields=('attachment_file','name') model = Attachment UPDATE I also do testing on the Attachment model class with these unit tests -- which all pass: from core.models.attachments import Attachment from core.models.attachments import upload_to from django.test import TestCase import unittest from django.core.files.storage import default_storage from django.core.files.base import ContentFile def suite(): return unittest.TestSuite( [ unittest.TestLoader().loadTestsFromTestCase(ModelTest), ] ) class ModelTest(TestCase): def test_model_minimum_fields(self): attachment = Attachment(name='name') attachment.attachment_file.save('test.txt',ContentFile("hello world")) attachment.save() self.assertEqual(str(attachment),'name') self.assertEqual(unicode(attachment),'name') self.assertTrue(attachment.directory) # def test_model_full_fields(self): # attachment = Attachment() # attachement.save() def test_file_operations_basic(self): root_directory = 'attachments' filename = 'test.txt' attachment = Attachment(name='name') attachment.attachment_file.save(filename,ContentFile('test')) attachment.save() upload_location = root_directory + '/' + attachment.directory + '/' + filename self.assertEqual(upload_to(attachment,filename),upload_location) self.assertTrue(default_storage.exists(upload_location)) def test_file_operations_delete(self): root_directory = 'attachments' filename = 'test.txt' attachment = Attachment(name='name') attachment.attachment_file.save(filename,ContentFile('test')) attachment.save() upload_location = upload_to(attachment,filename) attachment.delete() self.assertFalse(default_storage.exists(upload_location)) def test_file_operations_change(self): root_directory = 'attachments' filename_1 = 'test_1.txt' attachment = Attachment(name='name') attachment.attachment_file.save(filename_1,ContentFile('test')) attachment.save() upload_location_1 = upload_to(attachment,filename_1) self.assertTrue(default_storage.exists(upload_location_1)) filename_2 = 'test_2.txt' attachment.attachment_file.save(filename_2,ContentFile('test')) attachment.save() upload_location_2 = upload_to(attachment,filename_2) self.assertTrue(default_storage.exists(upload_location_2)) self.assertFalse(default_storage.exists(upload_location_1))

    Read the article

  • Advice on logic circuits and serial communications

    - by Spencer Ruport
    As far as I understand the serial port so far, transferring data is done over pin 3. As shown here: There are two things that make me uncomfortable about this. The first is that it seems to imply that the two connected devices agree on a signal speed and the second is that even if they are configured to run at the same speed you run into possible synchronization issues... right? Such things can be handled I suppose but it seems like there must be a simpler method. What seems like a better approach to me would be to have one of the serial port pins send a pulse that indicates that the next bit is ready to be stored. So if we're hooking these pins up to a shift register we basically have: (some pulse pin)-clk, tx-d Is this a common practice? Is there some reason not to do this? EDIT Mike shouldn't have deleted his answer. This I2C (2 pin serial) approach seems fairly close to what I did. The serial port doesn't have a clock you're right nobugz but that's basically what I've done. See here: private void SendBytes(byte[] data) { int baudRate = 0; int byteToSend = 0; int bitToSend = 0; byte bitmask = 0; byte[] trigger = new byte[1]; trigger[0] = 0; SerialPort p; try { p = new SerialPort(cmbPorts.Text); } catch { return; } if (!int.TryParse(txtBaudRate.Text, out baudRate)) return; if (baudRate < 100) return; p.BaudRate = baudRate; for (int index = 0; index < data.Length * 8; index++) { byteToSend = (int)(index / 8); bitToSend = index - (byteToSend * 8); bitmask = (byte)System.Math.Pow(2, bitToSend); p.Open(); p.Parity = Parity.Space; p.RtsEnable = (byte)(data[byteToSend] & bitmask) > 0; s = p.BaseStream; s.WriteByte(trigger[0]); p.Close(); } } Before anyone tells me how ugly this is or how I'm destroying my transfer speeds my quick answer is I don't care about that. My point is this seems much much simpler than the method you described in your answer nobugz. And it wouldn't be as ugly if the .Net SerialPort class gave me more control over the pin signals. Are there other serial port APIs that do?

    Read the article

  • crash in calloc

    - by mmd
    I'm trying to debug a program I wrote. I ran it inside gdb and I managed to catch a SIGABRT from inside calloc(). I'm completely confused about how this can arise. Can it be a bug in gcc or even libc?? More details: My program uses OpenMP. I ran it through valgrind in single-threaded mode with no errors. I also use mmap() to load a 40GB file, but I doubt that is relevant. Inside gdb, I'm running with 30 threads. Several identical runs (same input&CL) finished correctly, until the problematic one that I caught. On the surface this suggests there might be a race condition of some type. However, the SIGABRT comes from calloc() which is out of my control. Here is some relevant gdb output: (gdb) info threads [...] * 11 Thread 0x7ffff0056700 (LWP 73449) 0x00007ffff6a948a5 in raise () from /lib64/libc.so.6 [...] (gdb) thread 11 [Switching to thread 11 (Thread 0x7ffff0056700 (LWP 73449))]#0 0x00007ffff6a948a5 in raise () from /lib64/libc.so.6 (gdb) bt #0 0x00007ffff6a948a5 in raise () from /lib64/libc.so.6 #1 0x00007ffff6a96085 in abort () from /lib64/libc.so.6 #2 0x00007ffff6ad1fe7 in __libc_message () from /lib64/libc.so.6 #3 0x00007ffff6ad7916 in malloc_printerr () from /lib64/libc.so.6 #4 0x00007ffff6adb79f in _int_malloc () from /lib64/libc.so.6 #5 0x00007ffff6adbdd6 in calloc () from /lib64/libc.so.6 #6 0x000000000040e87f in my_calloc (re=0x7fff2867ef10, st=0, options=0x632020) at gmapper/../gmapper/../common/my-alloc.h:286 #7 read_get_hit_list_per_strand (re=0x7fff2867ef10, st=0, options=0x632020) at gmapper/mapping.c:1046 #8 0x000000000041308a in read_get_hit_list (re=<value optimized out>, options=0x632010, n_options=1) at gmapper/mapping.c:1239 #9 handle_read (re=<value optimized out>, options=0x632010, n_options=1) at gmapper/mapping.c:1806 #10 0x0000000000404f35 in launch_scan_threads (.omp_data_i=<value optimized out>) at gmapper/gmapper.c:557 #11 0x00007ffff7230502 in ?? () from /usr/lib64/libgomp.so.1 #12 0x00007ffff6dfc851 in start_thread () from /lib64/libpthread.so.0 #13 0x00007ffff6b4a11d in clone () from /lib64/libc.so.6 (gdb) f 6 #6 0x000000000040e87f in my_calloc (re=0x7fff2867ef10, st=0, options=0x632020) at gmapper/../gmapper/../common/my-alloc.h:286 286 res = calloc(size, 1); (gdb) p size $2 = 814080 (gdb) The function my_calloc() is just a wrapper, but the problem is not in there, as the real calloc() call looks legit. These are the limits set in the shell: $ ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 2067285 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 10240 cpu time (seconds, -t) unlimited max user processes (-u) 1024 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited The program is not out of memory, it's using 41GB on a machine with 256GB available: $ top -b -n 1 | grep gmapper 73437 user 20 0 41.5g 16g 15g T 0.0 6.6 55:17.24 gmapper-ls $ free -m total used free shared buffers cached Mem: 258437 195567 62869 0 82 189677 -/+ buffers/cache: 5807 252629 Swap: 0 0 0 I compiled using gcc (GCC) 4.4.6 20120305 (Red Hat 4.4.6-4), with flags -g -O2 -DNDEBUG -mmmx -msse -msse2 -fopenmp -Wall -Wno-deprecated -D__STDC_FORMAT_MACROS -D__STDC_LIMIT_MACROS.

    Read the article

  • QT vs. Net - REAL comparisons for R.A.D. projects

    - by Pirate for Profit
    Man in all these Qt vs. .NET discussions 90% these people argue about the dumbest crap. Trying to get a real comparison chart here, because I know a little about both frameworks but I don't know everything. I believe Qt and .NET both have strengths and weaknesses. This is to make a comparison that highlights these so people can make more informed decisions before embarking on a project, in the spirit of R.A.D. Event Handling In Qt the event handling system is very simple. You just emit signals when something cool happens and then catch them in slots. ie. // run some calculations, then emit valueChanged(30, false, 20.2); and then catching it, any object can make a slot to recieve that message easily void MyObj::valueChanged(int percent, bool ok, float timeRemaining). It's easy to "block" an event or "disconnect" when needed, and works seamlessly across threads... once you get the hang of it, it just seems a lot more natural and intuitive than the way the .NET event handling is set up (you know, void valueChanged(object sender, CustomEventArgs e). And I'm not just talking about syntax, because in the end the .NET anonymous delegates are the bomb. I'm also talking about in more than just reflection (because, yes, .NET obviously has much stronger reflection capabilities). I'm talking about in the way the system feels to a human being. Qt wins hands down for the simplest yet still flexible event handling system ever i m o. Plugins and such I do love some of the ease of C# compared to C++, as well as .NET's assembly architecture, even though it leads to a bunch of .dll's (there's ways to combine everything into a single exe though). That is a big bonus for modular projects, which are a PITA to import stuff in C++ as far as RAD is concerned. Database Ease of Doing Crap Also what about datasets and database manipulations. I think .net wins here but I'm not sure. Threading/Conccurency How do you guys think of the threading? In .NET, all I've ever done is make like a list of master worker threads with locks. I like QConcurrentFramework, you don't worry about locks or anything, and with the ease of the signal slot system across threads it's nice to get notified about the progress of things. QConcurrent is the simplest threading mechanism I've ever played with. Memory Usage Also what do you think of the overall memory usage comparison. Is the .NET garbage collector pretty on the ball and quick compared to the instantaneous nature of native memory management? Or does it just let programs leak up a storm and lag the computer then clean it up when it's about to really lag? Doesn't the just-in-time compiler make native code that is pretty good, like and that only happens the first time the program is run? However, I am a n00b who doesn't know what I'm talking about, please school me on the subject.

    Read the article

  • Lifetime issue of IDisposable unmanaged resources in a complex object graph?

    - by stakx
    This question is about dealing with unmanaged resources (COM interop) and making sure there won't be any resource leaks. I'd appreciate feedback on whether I seem to do things the right way. Background: Let's say I've got two classes: A class LimitedComResource which is a wrapper around a COM object (received via some API). There can only be a limited number of those COM objects, therefore my class implements the IDisposable interface which will be responsible for releasing a COM object when it's no longer needed. Objects of another type ManagedObject are temporarily created to perform some work on a LimitedComResource. They are not IDisposable. To summarize the above in a diagram, my classes might look like this: +---------------+ +--------------------+ | ManagedObject | <>------> | LimitedComResource | +---------------+ +--------------------+ | o IDisposable (I'll provide example code for these two classes in just a moment.) Question: Since my temporary ManagedObject objects are not disposable, I obviously have no control over how long they'll be around. However, in the meantime I might have Disposed the LimitedComObject that a ManagedObject is referring to. How can I make sure that a ManagedObject won't access a LimitedComResource that's no longer there? +---------------+ +--------------------+ | managedObject | <>------> | (dead object) | +---------------+ +--------------------+ I've currently implemented this with a mix of weak references and a flag in LimitedResource which signals whether an object has already been disposed. Is there any better way? Example code (what I've currently got): LimitedComResource: class LimitedComResource : IDisposable { private readonly IUnknown comObject; // <-- set in constructor ... void Dispose(bool notFromFinalizer) { if (!this.isDisposed) { Marshal.FinalReleaseComObject(comObject); } this.isDisposed = true; } internal bool isDisposed = false; } ManagedObject: class ManagedObject { private readonly WeakReference limitedComResource; // <-- set in constructor ... public void DoSomeWork() { if (!limitedComResource.IsAlive()) { throw new ObjectDisposedException(); // ^^^^^^^^^^^^^^^^^^^^^^^ // is there a more suitable exception class? } var ur = (LimitedComResource)limitedComResource.Target; if (ur.isDisposed) { throw new ObjectDisposedException(); } ... // <-- do something sensible here! } }

    Read the article

  • Why can't I use __getattr__ with Django models?

    - by Joshmaker
    I've seen examples online of people using __getattr__ with Django models, but whenever I try I get errors. (Django 1.2.3) I don't have any problems when I am using __getattr__ on normal objects. For example: class Post(object): def __getattr__(self, name): return 42 Works just fine... >>> from blog.models import Post >>> p = Post() >>> p.random 42 Now when I try it with a Django model: from django.db import models class Post(models.Model): def __getattr__(self, name): return 42 And test it on on the interpreter: >>> from blog.models import Post >>> p = Post() ERROR: An unexpected error occurred while tokenizing input The following traceback may be corrupted or invalid The error message is: ('EOF in multi-line statement', (6, 0)) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) /Users/josh/project/ in () /Users/josh/project/lib/python2.6/site-packages/django/db/models/base.pyc in init(self, *args, **kwargs) 338 if kwargs: 339 raise TypeError("'%s' is an invalid keyword argument for this function" % kwargs.keys()[0]) -- 340 signals.post_init.send(sender=self.class, instance=self) 341 342 def repr(self): /Users/josh/project/lib/python2.6/site-packages/django/dispatch/dispatcher.pyc in send(self, sender, **named) 160 161 for receiver in self._live_receivers(_make_id(sender)): -- 162 response = receiver(signal=self, sender=sender, **named) 163 responses.append((receiver, response)) 164 return responses /Users/josh/project/python2.6/site-packages/photologue/models.pyc in add_methods(sender, instance, signal, *args, **kwargs) 728 """ 729 if hasattr(instance, 'add_accessor_methods'): -- 730 instance.add_accessor_methods() 731 732 # connect the add_accessor_methods function to the post_init signal TypeError: 'int' object is not callable Can someone explain what is going on? EDIT: I may have been too abstract in the examples, here is some code that is closer to what I actually would use on the website: class Post(models.Model): title = models.CharField(max_length=255) slug = models.SlugField() date_published = models.DateTimeField() content = RichTextField('Content', blank=True, null=True) # Etc... Class CuratedPost(models.Model): post = models.ForeignKey('Post') position = models.PositiveSmallIntegerField() def __getattr__(self, name): ''' If the user tries to access a property of the CuratedPost, return the property of the Post instead... ''' return self.post.name # Etc... While I could create a property for each attribute of the Post class, that would lead to a lot of code duplication. Further more, that would mean anytime I add or edit a attribute of the Post class I would have to remember to make the same change to the CuratedPost class, which seems like a recipe for code rot.

    Read the article

  • Qt vs .NET - a few comparisons [closed]

    - by Pirate for Profit
    Event Handling In Qt the event handling system you just emit signals when something cool happens and then catch them in slots, for instance emit valueChanged(int percent, bool something); and void MyCatcherObj::valueChanged(int p, bool ok){} blocking them and disconnecting them when needed, doing it across threads... once you get the hang of it, it just seems a lot more natural and intuitive than the way the .NET event handling is set up (you know, object sender, CustomEventArgs e). And I'm not just talking about syntax, because in the end the .NET delegate crap is the bomb. I'm also talking about in more than just reflection (because, yes, .NET obviously has much stronger reflection capabilities). I'm talking about in the way the system feels to a human being. Qt wins hands down i m o. Basically, the footprints make more sense and you can visualize the project easier without the clunky event handling system. I wish I could it explain it better. The only thing is, I do love some of the ease of C# compared to C++ and .NET's assembly architecture. That is a big bonus for modular projects, which are a PITA to do in C++. Database Ease of Doing Crap Also what about datasets and database manipulations. I think .net wins here but I'm not sure. Threading/Conccurency How do you guys think of the threading? In .NET, all I've ever done is make like a list of master worker threads with locks. I like QConcurrentFramework, you don't worry about locks or anything, and with the ease of the signal slot system across threads it's nice to get notified about the progress of things. Memory Usage Also what do you think of the overall memory usage comparison. Is the .NET garbage collector pretty on the ball and quick compared to the instantaneous nature of native memory management? Or does it just let programs leak up a storm and lag the computer then clean it up when it's about to really lag? However, I am a n00b who doesn't know what I'm talking about, please school me on the subject.

    Read the article

  • Getting a KeyError in DB backend of django-digest

    - by rtmie
    I have just started to integrate django_digest into my app. As a start I have added the @httpdigest decorator to one of my views. If I try to connect to it I get a KeyError exception thrown in django_digest/backend/db.py . Depending on which db I configure I get a different KeyError in a different location. I am using Django 1.2.1, with MySql (also tested with sqlite). I am using the default values for all the settings options. As far as I can see I have followed all instructions but am struggling all day with this. I am using the repository versions of django-digest and python-digest. Any steer would be greatly appreciated. Tracebacks for sqlite and mysql below: with sqlite: Traceback (most recent call last): File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/core/servers/basehttp.py", line 674, in __call__ return self.application(environ, start_response) File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/core/handlers/wsgi.py", line 248, in __call__ signals.request_finished.send(sender=self.__class__) File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/dispatch/dispatcher.py", line 162, in send response = receiver(signal=self, sender=sender, **named) File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django_digest-1.8-py2.5.egg/django_digest/backend/db.py", line 16, in close_connection _connection.close() File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/db/backends/sqlite3/base.py", line 186, in close if self.settings_dict['NAME'] != ":memory:": KeyError: 'NAME' with mysql: Traceback (most recent call last): File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/core/servers/basehttp.py", line 674, in __call__ return self.application(environ, start_response) File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/core/handlers/wsgi.py", line 241, in __call__ response = self.get_response(request) File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/core/handlers/base.py", line 142, in get_response return self.handle_uncaught_exception(request, resolver, exc_info) File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/core/handlers/base.py", line 166, in handle_uncaught_exception return debug.technical_500_response(request, *exc_info) File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/core/handlers/base.py", line 80, in get_response response = middleware_method(request) File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django_digest-1.8-py2.5.egg/django_digest/middleware.py", line 13, in process_request if (not self._authenticator.authenticate(request) and File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django_digest-1.8-py2.5.egg/django_digest/__init__.py", line 86, in authenticate partial_digest = self._account_storage.get_partial_digest(digest_response.username) File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django_digest-1.8-py2.5.egg/django_digest/backend/db.py", line 97, in get_partial_digest cursor = get_connection().cursor() File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/db/backends/__init__.py", line 75, in cursor cursor = self._cursor() File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/db/backends/mysql/base.py", line 281, in _cursor if settings_dict['USER']: KeyError: 'USER'

    Read the article

  • iPhone App is leaking memory; Instruments and Clang cannot find the leak

    - by Norbert
    Hi, i've developed an iPhone program which is kind of an image manipulation program: The user get an UIImagePickerController and selects an image. Then the program does some heavy calculating in a new thread (for responsiveness of the application). The thread has, of course, its own autorelease pool. When calculation is done, the seperated thread signals the main thread that the result can be presented. The app creates a new view controller, pushes it onto the navigation controller. In short: UIImagePickerController new thread (autorelease pool) does some heavy calculation with image data signal to main thread that it's done main thread creates view controller and pushes it onto navigation controller view controller presents image result My program works well, but if I dismiss the navigation controller's top view controller by tapping on the back button and repeat the whole process several times, my app crashes. But only on the device! Instruments cannot find any leaks (except for some minor ones which I don't feel responsible for: thread creation, NSCFString; overall about 10 kB). Even Clang static analyzer tells me that my could seems to be all right. I know that the UIImage class can cache images and objects returned from convenience methods get freed only whet their autorelease pool gets drained. But most of the time I work with CGImageRef and I use UIImage' alloc, init & release methods to free memory as soon as possible. Currently, I don't know how to isolate the problem. How would you approach this problem? Crash Log: Incident Identifier: F4C202C9-1338-48FC-80AD-46248E6C7154 CrashReporter Key: bb6f526d8b9bb680f25ea8e93bb071566ccf1776 OS Version: iPhone OS 3.1.1 (7C145) Date: 2009-09-26 14:18:57 +0200 Free pages: 372 Wired pages: 7754 Purgeable pages: 0 Largest process: _MY_APP_ Processes Name UUID Count resident pages _MY_APP_ <032690e5a9b396058418d183480a9ab3> 17766 (jettisoned) (active) debugserver <ec29691560aa0e2994f82f822181bffd> 107 syslog_relay <21e13fa2b777218bdb93982e23fb65d3> 62 notification_pro <8a7725017106a28b545fd13ed58bf98c> 64 notification_pro <8a7725017106a28b545fd13ed58bf98c> 64 afcd <98b45027fbb1350977bf1ca313dee527> 65 mediaserverd <eb8fe997a752407bea573cd3adf568d3> 319 ptpd <b17af9cf6c4ad16a557d6377378e8a1e> 142 syslogd <ec8a5bc4483638539fa1266363dee8b8> 68 BTServer <1bb74831f93b1d07c48fb46cc31c15da> 119 apsd <a639ba83e666cc1d539223923ce59581> 165 notifyd <2ed3a1166da84d8d8868e64d549cae9d> 101 CommCenter <f4239480a623fb1c35fa6c725f75b166> 161 SpringBoard <8919df8091fdfab94d9ae05f513c0ce5> 2681 (active) accessoryd <b66bcf6e77c3ee740c6a017f54226200> 90 configd <41e9d763e71dc0eda19b0afec1daee1d> 275 fairplayd <cdce5393153c3d69d23c05de1d492bd4> 108 mDNSResponder <f3ef7a6b24d4f203ed147f476385ec53> 103 lockdownd <6543492543ad16ff0707a46e512944ff> 297 launchd <73ce695fee09fc37dd70b1378af1c818> 71 **End**

    Read the article
