Search Results

Search found 5205 results on 209 pages for 'extra'.

Page 162/209

  • Hiding/blocking tabs using windows forms in c#

    - by Audel
    The thing is that I have a 'log in window' and a 'mainwindow' that is called after pressing the log in button or the "VISITANT" button. If the log in button is pressed, the whole system comes up, and if I press the VISITANT button, one tab should disappear or be blocked or something. private void visitant(object sender, EventArgs e) { mainwindow menu = new mainwindow(); menu.Show(); //mainwindow.tabPage1.Enabled = false; //attempt1 //mainwindow.tabPage1.Visible = false; //attempt1 //System.Windows.Forms.tabPage1.Enabled = false;//attempt2 //System.Windows.Forms.tabPage1.Visible = false;//attempt2 this.Hide(); } The errors I get for attempt1 are Error 1 'System.mainwindow.tabPage1' is inaccessible due to its protection level and Error 2 An object reference is required for the non-static field, method, or property 'System.mainwindow.tabPage1', and the one I get for attempt2 is Error 1 The type or namespace name 'tabPage1' does not exist in the namespace 'System.Windows.Forms' (are you missing an assembly reference?). As you have probably guessed, "tabPage1" is the tab I need to hide when pressing the visitant button. I can't think of any more details; I will be around to provide any extra information. Thanks in advance.
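
    A hedged sketch of one way out: tabPage1 is a private designer field of mainwindow, so the login form cannot reach it directly; exposing a small public helper on mainwindow (the method name here is made up) and removing the page from its TabControl avoids the protection-level error. TabPage has no reliable Visible property in WinForms, so Remove/re-Add is the usual workaround; tabControl1 is the assumed name of the containing TabControl.

        // In mainwindow.cs - hypothetical helper; tabControl1/tabPage1 are the designer field names assumed here
        public void HideVisitorTab()
        {
            // Removing the page takes it out of the tab strip entirely;
            // keep a reference if it must come back for logged-in users.
            tabControl1.TabPages.Remove(tabPage1);
        }

        // In the login form
        private void visitant(object sender, EventArgs e)
        {
            mainwindow menu = new mainwindow();
            menu.HideVisitorTab();   // hide the tab before showing the window
            menu.Show();
            this.Hide();
        }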

    Read the article

  • ISO/IEC Website and Charging for C and C++ Standards

    - by Michael Aaron Safyan
    The ISO C Standard (ISO/IEC 9899) and the ISO C++ Standard (ISO/IEC 14882) are not published online; instead, one must purchase the PDF for each of those standards. I am wondering what the rationale is behind this... is it not detrimental to both the C and C++ programming languages that the authoritative specification for these languages is not made freely available and searchable online? Doesn't this encourage the use of possibly inaccurate, non-authoritative sources for information regarding these standards? While I understand that much time and effort has gone into developing the C and C++ standards, I am still somewhat puzzled by the choice to charge for the specification. The OpenGroup Base Specification, for example, is available for free online; they make money by charging for certification. Does anyone know why the ISO standards committees don't make their revenue from certifying standards compliance, instead of charging for these documents? Also, does anyone know if the ISO standards committee's atrocious-looking website is intentionally made to look that way? It's as if they don't want people visiting and buying the spec. One last thing... the C and C++ standards are generally described as "open standards"... while I realize that this means that anyone is permitted to implement the standard, should that definition of "open" be revised? Charging for the standard rather than making it openly available seems contrary to the spirit of openness. P.S. I do have a copy of the ISO/IEC 9899:1999 and ISO/IEC 14882:2003, so please no remarks about being cheap or anything... although if you are tempted to say such things, you might want to consider the high school, undergraduate, and graduate students who might not have all that much extra cash. Also, you might want to consider the fact that the ISO website is really sketchy and they don't even tell you the cost until you proceed to the checkout... doesn't really encourage one to go and get a copy, now does it?

    Read the article

  • CM and Agile validation process of merging to the Trunk?

    - by LoneCM
    Hello All, We are a new Agile shop and we are encountering an issue that I hope others have seen. In our process, the Trunk is considered an integration branch; it does not have to be releasable, but it does have to be stable and functional for others to branch off of. We create Feature branches off the Trunk for new development. All work and testing occurs in these branches. An individual branch pulls up as needed to stay integrated with the Trunk as other features are accepted and committed. But now we have numerous feature branches. Each is focused, has a short life cycle, and is pushed to the trunk as it is completed, so we are not debating the need for the branches and are trying very much to be Agile. My issue comes in here: I require that the branches pull up from the Trunk at the end of their life cycle, complete the validation and regression testing, and handle all configuration issues before pushing to the trunk. Once reintegrated into the Trunk, I ask for at least a build and an automated smoke test. However, I am now getting push back on the Trunk validation. The argument is that the developers can merge the code and not need the QA validation steps because they already completed the work in the feature branch. Therefore, the extra testing is not needed. I have attempted to remind management of the numerous times "brainless" merges have failed. Their solution is, instead of a build and regression testing, to have the developer diff the Feature branch and the newly merged Trunk. That process, in their minds, would replace the regression testing I asked for. So what do you require when you reintegrate back to the Trunk? What are the issues that we will encounter if we remove this step and replace it with the diff? Is the cost of staying Agile the additional work of integrating the branches? Thanks for any input. LoneCM

    Read the article

  • asp.net mvc ajax helper/post additional data w/o jquery.

    - by Jopache
    I would like to use the ajax helper to create ajax requests that send additional, dynamic data with the post (for example, get the last element with class = blogCommentDateTime and send the value of that last one to the controller, which will return only blog comments after it). I have successfully done so with the help of the jQuery Form plugin like so: $(document).ready(function () { $("#addCommentForm").submit(function () { var lastCommentDate = $(".CommentDateHidden:last").val(); var lastCommentData = { lastCommentDateTicks: lastCommentDate }; var formSubmitParams = { data: lastCommentData, success: AddCommentResponseHandler } $("#addCommentForm").ajaxSubmit(formSubmitParams); return false; }); This form was created with the html.beginform() method. I am wondering if there is an easy way to do this using the ajax.beginform() helper? When I use the same code but replace html.beginform() with ajax.beginform() and then submit the form, I end up issuing 2 posts (which is understandable, one being taken care of by the helper, the other one by me with the JS above; I can't create 2 requests, so this option is out). I tried getting rid of the return false and changing ajaxSubmit() to ajaxForm() so that it would only "prepare" the form, and this results in only one post, but it does not include the extra parameter that I defined, so this is worthless as well. I then tried keeping the ajaxForm() but calling it whenever the submit button on the form gets clicked rather than when the form gets submitted (I guess this is almost the same thing), and that results in 2 posts also. The biggest reason I am asking this question is that I have run into some issues with the javascript above and the mvc validation provided by the mvc framework (which I will set up another question for) and would like to know this so I can dive further into my validation issue.
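
    A hedged alternative that keeps ajax.beginform(): place a hidden input (named lastCommentDateTicks here, an assumed name matching the action parameter) inside the form and fill it from a plain submit handler; the helper serializes the form itself, so the value rides along in its single post and no second request is issued.

        // Sketch only: a hidden <input id="lastCommentDateTicks" name="lastCommentDateTicks" />
        // is assumed to sit inside the Ajax.BeginForm() form.
        $(document).ready(function () {
            $("#addCommentForm").submit(function () {
                // copy the latest comment date into the hidden field; the Ajax helper
                // serializes the whole form, so this value goes out with its post
                $("#lastCommentDateTicks").val($(".CommentDateHidden:last").val());
                // no return false here - the Ajax.BeginForm machinery performs the submit
            });
        });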

    Read the article

  • django: control json serialization

    - by abolotnov
    Is there a way to control json serialization in django? The simple code below will return the serialized object in json: co = Collection.objects.all() c = serializers.serialize('json',co) The json will look similar to this: [ { "pk": 1, "model": "picviewer.collection", "fields": { "urlName": "architecture", "name": "\u0413\u043e\u0440\u043e\u0434 \u0438 \u0430\u0440\u0445\u0438\u0442\u0435\u043a\u0442\u0443\u0440\u0430", "sortOrder": 0 } }, { "pk": 2, "model": "picviewer.collection", "fields": { "urlName": "nature", "name": "\u041f\u0440\u0438\u0440\u043e\u0434\u0430", "sortOrder": 1 } }, { "pk": 3, "model": "picviewer.collection", "fields": { "urlName": "objects", "name": "\u041e\u0431\u044a\u0435\u043a\u0442\u044b \u0438 \u043d\u0430\u0442\u044e\u0440\u043c\u043e\u0440\u0442", "sortOrder": 2 } } ] You can see it's serializing it in a way that lets you re-create the whole model, should you want to do this at some point - fair enough, but not very handy for simple JS ajax in my case: I want to bring the traffic to a minimum and make the whole thing a little clearer. What I did is create a view that passes the object to a .json template, and the template does something like this to generate "nicer" json output: [ {% if collections %} {% for c in collections %} {"id": {{c.id}},"sortOrder": {{c.sortOrder}},"name": "{{c.name}}","urlName": "{{c.urlName}}"}{% if not forloop.last %},{% endif %} {% endfor %} {% endif %} ] This does work and the output is much (?) nicer: [ { "id": 1, "sortOrder": 0, "name": "????? ? ???????????", "urlName": "architecture" }, { "id": 2, "sortOrder": 1, "name": "???????", "urlName": "nature" }, { "id": 3, "sortOrder": 2, "name": "??????? ? ?????????", "urlName": "objects" } ] However, I'm bothered by the fact that my solution uses templates (an extra processing step and a possible performance impact) and it will take manual work to maintain should I update the model, for example. I'm thinking the json generation should be part of the model (correct me if I'm wrong) and done with either the native python json or the django implementation, but I can't figure out how to make it strip the bits that I don't want. One more thing - even when I restrict it to a set of fields to serialize, it will always keep the id outside the element container and instead present it as "pk" outside of it.
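
    One way to keep this out of templates, sketched under the assumption that a flat list of fields is enough: build plain dicts with values() and dump them with the json module (django.utils.simplejson on older Django). That puts id inside each element and skips the pk/model/fields wrapper entirely; the view name below is hypothetical.

        import json  # on older Django: from django.utils import simplejson as json
        from django.http import HttpResponse
        from picviewer.models import Collection

        def collections_json(request):
            # values() yields plain dicts containing only the listed fields, id included
            data = list(Collection.objects.values('id', 'sortOrder', 'name', 'urlName'))
            return HttpResponse(json.dumps(data), content_type='application/json')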

    Read the article

  • How to join results from two different sets in LINQ?

    - by Inez
    Hi, I get some data about customers in my database with this method: public List<KlientViewModel> GetListOfKlientViewModel() { List<KlientViewModel> list = _klientRepository.List().Select(k => new KlientViewModel { Id = k.Id, Imie = k.Imie, Nazwisko = k.Nazwisko, Nazwa = k.Nazwa, SposobPlatnosci = k.SposobPlatnosci, }).ToList(); return list; } but also I have another method which counts value for extra field in KlientViewModel - field called 'Naleznosci'. I have another method which counts value for this field based on customers ids, it looks like this: public Dictionary<int, decimal> GetNaleznosc(List<int> klientIds) { return klientIds.ToDictionary(klientId => klientId, klientId => (from z in _zdarzenieRepository.List() from c in z.Klient.Cennik where z.TypZdarzenia == (int) TypyZdarzen.Sprzedaz && z.IdTowar == c.IdTowar && z.Sprzedaz.Data >= c.Od && (z.Sprzedaz.Data < c.Do || c.Do == null) && z.Klient.Id == klientId select z.Ilosc*(z.Kwota > 0 ? z.Kwota : c.Cena)).Sum() ?? 0); } So what I want to do is to join data from method GetNaleznosc with data generated in method GetListOfKlientViewModel. I call GetNaleznosc like this: GetNaleznosc(list.Select(k => k.Id).ToList()) but don't know what to do next.
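
    A minimal sketch of the missing step: once the dictionary comes back, walk the view models and copy each client's value into Naleznosci (TryGetValue keeps a missing id from throwing).

        var list = GetListOfKlientViewModel();
        var naleznosci = GetNaleznosc(list.Select(k => k.Id).ToList());

        foreach (var klient in list)
        {
            decimal value;
            // fill the extra field from the dictionary; default to 0 if the id is absent
            klient.Naleznosci = naleznosci.TryGetValue(klient.Id, out value) ? value : 0m;
        }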

    Read the article

  • Unit Testing a rails 2.3.5 plugin

    - by brad
    I'm writing a new plugin for a rails 2.3.5 app. I've included an app directory (which makes it an engine) so i can easily load some extra routes. Not sure if that affects anything. Anyway, in the test directory i have two files: test_helper.rb and my_plugin_test.rb These files were generated automatically using script/generate plugin my_plugin When I go to vendor/plugins/my_plugin directory and run rake test they don't seem to run. I get the following console output: (in /Users/me/Repos/my_app/source/trunk/vendor/plugins/my_plugin) /Users/me/.rvm/rubies/jruby-1.4.0/bin/jruby -I"lib:lib:test" "/Users/me/.rvm/gems/jruby-1.4.0/gems/rake-0.8.7/lib/rake/rake_test_loader.rb" "test/my_plugin_test.rb" So it obviously sees my test file, but none of the tests inside get run, I just get back to my console prompt. What am I missing here? I figured the generated code would work out of the box Here are the two files test_helper.rb require 'rubygems' require 'active_support' require 'active_support/test_case' my_plugin_test.rb require 'test_helper' class MyPluginTest < ActiveSupport::TestCase # Replace this with your real tests. test "the truth" do assert true end test "Factories are supported" do assert_not_nil Factory end end File structure vendor - plugins - my_plugin - app - config - routes.rb - generators - my_plugin - some generator files.rb - lib - my_plugin.rb - my_plugin - my_plugin_lib_file.rb - rails - init.rb - Rakefile - tasks - my_plugin_tasks.rake - test - test_helper.rb - my_plugin_test.rb

    Read the article

  • Jasmine testing coffeescript expect(setTimeout).toHaveBeenCalledWith

    - by Lee Quarella
    In the process of learning Jasmine, I've come to this issue. I want a basic function to run, then set a timeout to call itself again... simple stuff. class @LoopObj constructor: -> loop: (interval) -> #do some stuff setTimeout((=>@loop(interval)), interval) But I want to test to make sure setTimeout was called with the proper args: describe "loop", -> xit "does nifty things", -> it "loops at a given interval", -> my_nifty_loop = new LoopObj interval = 10 spyOn(window, "setTimeout") my_nifty_loop.loop(interval) expect(setTimeout).toHaveBeenCalledWith((-> my_nifty_loop.loop(interval)), interval) I get this error: Expected spy setTimeout to have been called with [ Function, 10 ] but was called with [ [ Function, 10 ] ] Is this because the (-> my_nifty_loop.loop(interval)) function does not equal the (=>@loop(interval)) function? Or does it have something to do with the extra square brackets around the second [ [ Function, 10 ] ]? Something else altogether? Where have I gone wrong?
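
    It is the first point: two separately created function objects never compare equal, so the matcher can never match the callback (the doubled brackets are just how the spy prints its recorded argument lists, one inner array per call). The usual fix is jasmine.any(Function) for the callback while still checking the interval exactly; a sketch in the same CoffeeScript style:

        it "loops at a given interval", ->
          my_nifty_loop = new LoopObj
          interval = 10
          spyOn(window, "setTimeout")
          my_nifty_loop.loop(interval)
          # accept any function for the callback, but require the exact interval
          expect(setTimeout).toHaveBeenCalledWith(jasmine.any(Function), interval)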

    Read the article

  • Using ZLib unit to compress files vs using ZipForge

    - by user193655
    There are many questions on zipping in Delphi, but this is not a duplicate. I am using ZipForge for zip/unzip capability in my application. Currently I use 2 features of ZipForge: 1) zip and unzip (!) 2) password protect the archives. Now I am removing the password from all the archives, so I only need to zip and unzip files. I zip them just to minimize bandwidth when uploading/downloading files from the server. So my idea is to process all files once, unzipping them (with the password) and rezipping them without a password. I have nothing against ZipForge, but it is an extra component; every time I upgrade to a newer Delphi version I have to wait for the new IDE support, and moreover the more components, the more problems during installation. So since what I do is very simple, I'd like to replace ZipForge with 2 simple functions using the ZLib unit. I found (and tested) the functions here on Torry's. What do you think of using the ZLib unit? Do you see any potential problem that I would not have with ZipForge? Can you comment on speed?
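
    For reference, a minimal sketch of the ZLib-unit route (class names differ slightly between Delphi versions). The main caveat: TCompressionStream writes a raw zlib stream, not a .zip archive, so the server side must unpack it the same way and existing ZipForge archives stay unreadable by it.

        uses Classes, ZLib;

        procedure CompressFile(const Src, Dst: string);
        var
          InFile, OutFile: TFileStream;
          Z: TCompressionStream;
        begin
          InFile := TFileStream.Create(Src, fmOpenRead);
          OutFile := TFileStream.Create(Dst, fmCreate);
          try
            Z := TCompressionStream.Create(clDefault, OutFile);
            try
              Z.CopyFrom(InFile, InFile.Size);  // compressed bytes land in OutFile
            finally
              Z.Free;  // freeing flushes the remaining compressed data
            end;
          finally
            OutFile.Free;
            InFile.Free;
          end;
        end;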

    Read the article

  • Multi-account sync with Dropbox API

    - by Dan
    I'm trying to create a web app that lets users share files with each other through Dropbox. At the moment, Dropbox handles all the sharing, and there's one central Dropbox account running on the web server that shares the folder with the people who want it. I'm trying to change it so people don't have to accept a new folder invitation each time. I'd like to have them authorize my app to access an app folder in their Dropbox account, and all their shared folders would go inside there. Any changes they make would get noticed by the app on the server and synced to everyone else's folders. There's a couple things I'm having trouble figuring out to make this work: Do I need to make repeated calls to /delta for every account? I can't think how else I'd do this, but that sounds like it would quickly turn into thousands of requests a minute just polling for updates. When someone adds a file, do I have to upload it once for each account? That seems like a huge waste of bandwidth. I've looked into using /copy_ref, which I think would add a file to another user's account without my app ever touching it, but my app's web interface also allows users to upload files directly to my server, which would then need to be synced with everyone else's folders. That file isn't on Dropbox's servers yet, so /copy_ref obviously wouldn't work. For a little extra context, my app is written in node.js, and I've been playing with this library to interface with Dropbox, which uses their REST API.

    Read the article

  • Cross compiling from MinGW on Fedora 12 to Windows - console window?

    - by elcuco
    After reading this article http://lukast.mediablog.sk/log/?p=155 I decided to use mingw on linux to compile windows applications. This means I can compile, test, debug and release directly from Linux. I hacked this build script which will cross compile the application and even package it in a ZIP file. Note that I am using out of source builds for QMake (did you even know this is supported? wow...). Also note that the script will pull the needed DLLs automagically. Here is the script for you all internets to use and abuse: #! /bin/sh set -x set -e VERSION=0.1 PRO_FILE=blabla.pro BUILD_DIR=mingw_build DIST_DIR=blabla-$VERSION-win32 # clean up old shite rm -fr $BUILD_DIR mkdir $BUILD_DIR cd $BUILD_DIR # start building QMAKESPEC=fedora-win32-cross qmake-qt4 QT_LIBINFIX=4 config=\"release\ quiet\" ../$PRO_FILE #qmake-qt4 -spec fedora-win32-cross make DLLS=`i686-pc-mingw32-objdump -p release/*.exe | grep dll | awk '{print $3}'` for i in $DLLS mingwm10.dll ; do f=/usr/i686-pc-mingw32/sys-root/mingw/bin/$i if [ ! -f $f ]; then continue; fi cp -av $f release done mkdir -p $DIST_DIR mv release/*.exe $DIST_DIR mv release/*.dll $DIST_DIR zip -r ../$DIST_DIR.zip $DIST_DIR The compiled binary works on the Windows7 machine I tested. Now to the questions: When I execute the application on Windows, the theme is not the Windows7 theme. I assume I am missing a style module, I am not really sure yet. The application gets a console window for some reason. The second point (the console window) is critical. How can I remove this background window? Please note that the extra config lines are not working for me, what am I missing there?
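
    On the console window: that usually just means the .exe was linked as a console-subsystem binary. With qmake the console flag in CONFIG controls this, so a hedged fix is to force it off in the .pro file (or hand -mwindows to the MinGW linker):

        # in blabla.pro - make sure the cross mkspec does not produce a console build
        CONFIG -= console
        # if the mkspec still links a console binary, forcing the subsystem also works:
        # QMAKE_LFLAGS += -mwindows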

    Read the article

  • Dynamic Array traversal in PHP

    - by Kristoffer Bohmann
    I want to build a hierarchy from a one-dimensional array and can (almost) do so with a more or less hardcoded code. How can I make the code dynamic? Perhaps with while(isset($array[$key])) { ... }? Or, with an extra function? Like this: $out = my_extra_traverse_function($array,$key); function array_traverse($array,$key=NULL) { $out = (string) $key; $out = $array[$key] . "/" . $out; $key = $array[$key]; $out = $array[$key] ? $array[$key] . "/" . $out : ""; $key = $array[$key]; $out = $array[$key] ? $array[$key] . "/" . $out : ""; $key = $array[$key]; $out = $array[$key] ? $array[$key] . "/" . $out : ""; return $out; } $a = Array(102=>101, 103=>102, 105=>107, 109=>105, 111=>109, 104=>111); echo array_traverse($a,104); Output: 107/105/109/111/104
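
    The hardcoded repetition collapses into exactly the while(isset($array[$key])) loop hinted at above - keep following the mapping and prepending until no parent is left. A sketch:

        <?php
        // Walk $array upwards from $key, prepending each ancestor to the path.
        function array_traverse($array, $key = NULL)
        {
            $out = (string) $key;
            while (isset($array[$key])) {
                $out = $array[$key] . "/" . $out;   // prepend the parent
                $key = $array[$key];                // move one level up
            }
            return $out;
        }

        $a = array(102 => 101, 103 => 102, 105 => 107, 109 => 105, 111 => 109, 104 => 111);
        echo array_traverse($a, 104);   // prints 107/105/109/111/104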

    Read the article

  • Returning object from function

    - by brainydexter
    I am really confused now about which method to use to return an object from a function. I want some feedback on the solutions for the given requirements. Scenario A: The returned object is to be stored in a variable which need not be modified during its lifetime. Thus, const Foo SomeClass::GetFoo() { return Foo(); } invoked as: someMethod() { const Foo& l_Foo = someClassPInstance->GetFoo(); //... } Scenario B: The returned object is to be stored in a variable which will be modified during its lifetime. Thus, void SomeClass::GetFoo(Foo& a_Foo_ref) { a_Foo_ref = Foo(); } invoked as: someMethod() { Foo l_Foo; someClassPInstance->GetFoo(l_Foo); //... } I have one question here: Let's say that Foo cannot have a default constructor. Then how would you deal with that in this situation, since we can't write this anymore: Foo l_Foo Scenario C: Foo SomeClass::GetFoo() { return Foo(); } invoked as: someMethod() { Foo l_Foo = someClassPInstance->GetFoo(); //... } I think this is not the recommended approach since it would incur constructing extra temporaries. What do you think? Also, do you recommend a better way to handle this instead?
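
    On the embedded question - if Foo has no default constructor, Scenario B's "declare first, fill later" shape simply doesn't work; returning by value (Scenario C) still does, because the object is constructed inside the function and the copy is normally elided (RVO), so the "extra temporaries" worry is smaller than it looks. A minimal sketch:

        // Sketch: Foo has no default constructor, yet Scenario C works fine.
        class Foo {
        public:
            explicit Foo(int value) : value_(value) {}   // no default constructor
            int value() const { return value_; }
        private:
            int value_;
        };

        class SomeClass {
        public:
            Foo GetFoo() const { return Foo(42); }       // return by value, relies on RVO
        };

        int main() {
            SomeClass instance;
            Foo l_Foo = instance.GetFoo();   // constructed in place, no default ctor needed
            return l_Foo.value() == 42 ? 0 : 1;
        }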

    Read the article

  • C++ operator new, object versions, and the allocation sizes

    - by mizubasho
    Hi. I have a question about different versions of an object, their sizes, and allocation. The platform is Solaris 8 (and higher). Let's say we have programs A, B, and C that all link to a shared library D. Some class is defined in the library D, let's call it 'classD', and assume the size is 100 bytes. Now, we want to add a few members to classD for the next version of program A, without affecting existing binaries B or C. The new size will be, say, 120 bytes. We want program A to use the new definition of classD (120 bytes), while programs B and C continue to use the old definition of classD (100 bytes). A, B, and C all use the operator "new" to create instances of D. The question is, when does the operator "new" know the amount of memory to allocate? Compile time or run time? One thing I am afraid of is, programs B and C expect classD to be, and allocate, 100 bytes whereas the new shared library D requires 120 bytes for classD, and this inconsistency may cause memory corruption in programs B and C if I link them with the new library D. In other words, the area for the extra 20 bytes that the new classD requires may be allocated to some other variables by programs B and C. Is this assumption correct? Thanks for your help.
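
    The short answer is compile time: `new ClassD` is lowered by the caller's compiler into roughly operator new(sizeof(ClassD)) followed by the constructor, and sizeof is a constant taken from whatever header B and C were compiled against - so yes, the assumption is essentially correct, and mixing the old binaries with the 120-byte library risks exactly that kind of corruption. A small illustration of what the caller's compiler bakes in:

        #include <new>
        #include <cstdio>

        struct ClassD { char data[100]; };   // the definition B and C were built against

        int main() {
            // roughly what the compiler emits for `new ClassD`:
            void* raw = ::operator new(sizeof(ClassD));   // sizeof fixed at compile time
            ClassD* d = new (raw) ClassD();               // constructor runs in place
            std::printf("allocated %lu bytes\n", (unsigned long)sizeof(ClassD));
            d->~ClassD();
            ::operator delete(raw);
            return 0;
        }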

    Read the article

  • How to create custom omniauth provider (how to return data)

    - by user2803917
    I searched all around the net for how to create a custom provider for omniauth, and I partly succeeded. I created a gem and it works perfectly, except for the part that I can't understand: how to return the gathered data to the sessions controller like other providers do. Here is the code in the auth gem: require 'multi_json' require 'digest/md5' require 'rest-client' module OmniAuth module Strategies class Providername < OmniAuth::Strategies::OAuth attr_accessor :app_id, :api_key, :auth def initialize(app, app_id = nil, api_key = nil, options = {}) super(app, :providername) @app_id = app_id @api_key = api_key end protected def request_phase redirect "http://valid_url" end def callback_phase if request.params['code'] && request.params['status'] == 'ok' response = RestClient.get("http://valid_url2/?code=#{request.params['auth_code']}") auth = MultiJson.decode(response.to_s) unless auth['error'] @auth_data = auth if @auth_data @return_data = OmniAuth::Utils.deep_merge(super, { 'uid' => @auth_data['uid'], 'nickname' => @auth_data['nick'], 'user_info' => { 'first_name' => @auth_data['name'], 'last_name' => @auth_data['surname'], 'location' => @auth_data['place'], }, 'credentials' => { 'apikey' => @auth_data['apikey'] }, 'extra' => {'user_hash' => @auth_data} }) end end else fail!(:invalid_request) end rescue Exception => e fail!(:invalid_response, e) end end end end and here is how I call it in my initializers: Rails.application.config.middleware.use OmniAuth::Builder do provider "providername", Settings.providers.providername.app_id, Settings.providers.providername.app_secret end In this code everything works fine so far: the provider gets called, I get the info from the provider, and I create a hash (@auth_data) with the info - but how do I return it?

    Read the article

  • C# How can I return my base class in a webservice

    - by HenriM
    I have a class Car and a derived SportsCar : Car. Something like this: public class Car { public int TopSpeed{ get; set; } } public class SportsCar : Car { public string GirlFriend { get; set; } } I have a webservice with methods returning Cars, i.e.: [WebMethod] public Car GetCar() { return new Car() { TopSpeed = 100 }; } It returns: <Car> <TopSpeed>100</TopSpeed> </Car> I have another method that also returns cars like this: [WebMethod] public Car GetMyCar() { Car mycar = new SportsCar() { GirlFriend = "JLo", TopSpeed = 300 }; return mycar; } It compiles fine and everything, but when invoking it I get: System.InvalidOperationException: There was an error generating the XML document. --- System.InvalidOperationException: The type wsBaseDerived.SportsCar was not expected. Use the XmlInclude or SoapInclude attribute to specify types that are not known statically. I find it strange that it can't serialize this as a straight car, as mycar is a car. Adding XmlInclude on the WebMethod of course removes the error: [WebMethod] [XmlInclude(typeof(SportsCar))] public Car GetMyCar() { Car mycar = new SportsCar() { GirlFriend = "JLo", TopSpeed = 300 }; return mycar; } and it now returns: <Car xsi:type="SportsCar"> <TopSpeed>300</TopSpeed> <GirlFriend>JLo</GirlFriend> </Car> But I really want the base class returned, without the extra properties etc. from the derived class. Is that at all possible without creating mappers etc.? Please say yes ;)
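
    XmlSerializer dispatches on the runtime type, which is why a SportsCar instance drags its extra members in. If only the Car portion should go over the wire, the cheapest fix is to copy the base fields into a plain Car before returning - a one-line "mapper" rather than a mapping framework; a sketch:

        [WebMethod]
        public Car GetMyCar()
        {
            // build whatever derived object the business logic wants...
            Car mycar = new SportsCar() { GirlFriend = "JLo", TopSpeed = 300 };

            // ...but return a plain Car so only base-class data is serialized
            return new Car { TopSpeed = mycar.TopSpeed };
        }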

    Read the article

  • PHP/MySQL - Working with two databases, one shared and one local to an instance of application

    - by Extrakun
    The situation: Using a off-the-shelf PHP application, I have to add in a new module for extra functionality. Today, it is made known that eventually four different instances of the application are to be deployed, but the data from the new functionality is to be shared among those 4 instances. Each instance should still have their own database for users, content and etc. So the data for the new functionality goes into a 'shared' database. The data for the application (user login, content, uploads) go into a 'local' database To make things more complex, the new module I am writing will fetch data from the local DB and the shared DB at the same time. A re-write of the base application will take too long. I only have control over the new module which I am writing. The ideal solution: Is there a way to encapsulate 2 databases into one name using MySQL? I do not wish to switch DB connections or specifically name the DB to query from inside my SQL statements. The application uses a DB wrapper, so I am able to change it somehow so I can invisibly attempt to read/write to two different DB. What is the best way to handle this problem?
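
    MySQL cannot present two schemas under one name, but a single connection can reach both as long as they live on the same server and the account has rights on each: default to the local database and qualify shared tables with the shared database's name, so the DB wrapper never has to switch connections. A sketch with made-up names (app_local, app_shared, shared_data, user_ref are all hypothetical):

        <?php
        // One connection; unqualified tables resolve to app_local,
        // shared tables are addressed as app_shared.table_name.
        $pdo = new PDO('mysql:host=localhost;dbname=app_local', 'user', 'pass');

        $sql = "SELECT u.username, s.some_value
                FROM users AS u
                JOIN app_shared.shared_data AS s ON s.user_ref = u.id";
        foreach ($pdo->query($sql) as $row) {
            // ... use $row as usual
        }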

    Read the article

  • Auto filling polymorphic table on save or on delete in django

    - by Mo J. Mughrabi
    Hi, I am working on a project in which I made an app "core"; it will contain some of the models reused across my projects, most of which are polymorphic models (generic content types) that will be linked to different models. In the example below I am trying to create an audit model that will be linked to several models which may require auditing. This is the polls/models.py from django.db import models from django.contrib.auth.models import User from core.models import * from django.contrib.contenttypes import generic class Poll(models.Model): ## TODO: Document question = models.CharField(max_length=300) question_slug=models.SlugField(editable=False) start_poll_at = models.DateTimeField(null=True) end_poll_at = models.DateTimeField(null=True) is_active = models.BooleanField(default=True) audit_obj=generic.GenericRelation(Audit) def __unicode__(self): return self.question class Choice(models.Model): ## TODO: Document choice = models.CharField(max_length=200) poll=models.ForeignKey(Poll) audit_obj=generic.GenericRelation(Audit) class Vote(models.Model): ## TODO: Document choice=models.ForeignKey(Choice) Ip_Address=models.IPAddressField(editable=False) vote_at=models.DateTimeField("Vote at", editable=False) here is the core/models.py from django.db import models from django.contrib.auth.models import User from django.contrib.contenttypes.models import ContentType from django.contrib.contenttypes import generic class Audit(models.Model): ## TODO: Document # Polymorphic model using generic relation through DJANGO content type created_at = models.DateTimeField("Created at", auto_now_add=True) created_by = models.ForeignKey(User, db_column="created_by", related_name="%(app_label)s_%(class)s_y+") updated_at = models.DateTimeField("Updated at", auto_now=True) updated_by = models.ForeignKey(User, db_column="updated_by", null=True, blank=True, related_name="%(app_label)s_%(class)s_y+") content_type = models.ForeignKey(ContentType) object_id = models.PositiveIntegerField(unique=True) content_object = generic.GenericForeignKey('content_type', 'object_id') and here is polls/admin.py from django.core.context_processors import request from polls.models import Poll, Choice from core.models import * from django.contrib import admin class ChoiceInline(admin.StackedInline): model = Choice extra = 3 class PollAdmin(admin.ModelAdmin): inlines = [ChoiceInline] admin.site.register(Poll, PollAdmin) I am quite new to django; what I am trying to do here is insert a record in Audit when a record is inserted in polls, and then update that same record when a record is updated in polls.
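
    One way to auto-fill Audit without touching the admin is to listen to post_save for each audited model and create or refresh the matching Audit row there. Sketched below under clear assumptions: created_by/updated_by need a real User, and request.user is not reachable from a signal, so the _audit_user attribute used here is purely hypothetical (it would have to be set on the instance before saving, e.g. in the admin's save_model or via middleware).

        from django.db.models.signals import post_save
        from django.contrib.contenttypes.models import ContentType
        from core.models import Audit
        from polls.models import Poll

        def audit_on_save(sender, instance, created, **kwargs):
            content_type = ContentType.objects.get_for_model(instance)
            user = getattr(instance, '_audit_user', None)  # hypothetical hand-off of the acting user
            if created:
                Audit.objects.create(content_type=content_type, object_id=instance.pk,
                                     created_by=user, updated_by=user)
            else:
                audit = Audit.objects.get(content_type=content_type, object_id=instance.pk)
                audit.updated_by = user
                audit.save()  # auto_now refreshes updated_at

        post_save.connect(audit_on_save, sender=Poll)  # repeat for Choice and any other audited model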

    Read the article

  • CEIL is one too high for exact integer divisions

    - by Synetech
    This morning I lost a bunch of files, but because the volume they were on was both internally and externally defragmented, all of the information necessary for a 100% recovery is available; I just need to fill in the FAT where required. I wrote a program to do this and tested it on a copy of the FAT that I dumped to a file, and it works perfectly except that for a few of the files (17 out of 526), the FAT chain is one single cluster too long, and thus cross-linked with the next file. Fortunately I know exactly what the problem is. I used ceil in my EOF calculation because even a single byte over will require a whole extra cluster: //Cluster is the starting cluster of the file //Size is the size (in bytes) of the file //BPC is the number of bytes per cluster //NumClust is the number of clusters in the file //EOF is the last cluster of the file’s FAT chain DWORD NumClust = ceil( (float)(Size / BPC) ) DWORD EOF = Cluster + NumClust; This algorithm works fine for everything except files whose size happens to be exactly a multiple of the cluster size, in which case they end up being one cluster too long. I thought about it for a while but am at a loss as to a way to do this. It seems like it should be simple but somehow it is surprisingly tricky. What formula would work for files of any size?
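
    Two things are going on here: Size / BPC is integer division, so the cast to float happens after the fraction is already gone and ceil never rounds anything up, and the last cluster of a chain is start + count - 1, not start + count. The current version only looks right for partial clusters because the truncating division and the missing minus-one cancel out. Pure integer arithmetic handles every size:

        /* Ceiling division in integers: rounds up for a partial last cluster,
           stays exact when Size is a multiple of BPC. Assumes Size > 0.        */
        DWORD NumClust = (Size + BPC - 1) / BPC;
        /* equivalently: Size / BPC + (Size % BPC != 0), which cannot overflow   */

        /* Last cluster of the chain is start + count - 1; start + count points
           one past the end, which is what cross-links into the next file.      */
        DWORD EOF = Cluster + NumClust - 1;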

    Read the article

  • Return value from Object match

    - by Hito_kun
    I'm, by no means, JS fluent, so forgive me if I'm asking for some really basic stuff, but I've not been able to find a proper answer to my question. I'm writing my first Node.js (plus Express framework and Socket.io) app and I'm having some fun setting up the server side of a FB-like messenger (surprise!!!). So, let's say I have this data structure to store online users (this is a JSON array, but I'm not sure it is the best way to do it, or whether I should go with JavaScript objects): [ { "site": 45, "users": [ { "idUser": 5, "idSocket": "qwe87r7w8qwe", "name": "Carlos Ray Norris" }, { "idUser": 6, "idSocket": "v8d9d0fgfs7d", "name": "John Connor" } ] }, { "site": 48, "users": [ { "idUser": 22, "idSocket": "qwe87r7w8qwe", "name": "David Bowie" }, { "idUser": 23, "idSocket": "v8d9d0fgfs7d", "name": "Barack H. Obama" } ] } ] What I want to do is search the array for value x given y. In this case, retrieving the idSocket knowing the idUser WITHOUT having to run through the array values. So I have basically 2 questions: first, what would be the proper way to store online users? And secondly, how do I find values matching the values I already know (find the idSocket that has a given idUser)? I would like a pure JS approach (or one using some of the tools given by Node, Socket.io or Express), but if that's not possible then I can look for some jQuery.
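
    With this structure some scan is unavoidable, but it can live in one helper; if lookups get frequent, a second plain object keyed by idUser (maintained on connect/disconnect) gives O(1) access instead. A sketch of the helper:

        // Return the idSocket of the first user whose idUser matches, or null
        // if nobody with that id is online on any site.
        function findSocketByUser(sites, idUser) {
            for (var i = 0; i < sites.length; i++) {
                var users = sites[i].users;
                for (var j = 0; j < users.length; j++) {
                    if (users[j].idUser === idUser) {
                        return users[j].idSocket;
                    }
                }
            }
            return null;
        }

        // e.g. findSocketByUser(onlineUsers, 22) -> "qwe87r7w8qwe"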

    Read the article

  • Is there a way to parse XML via SAX/DOM with line numbers available per node.

    - by Chris
    I already have written a DOM parser for a large XML document format that contains a number of items that can be used to automatically generate Java code. This is limited to small expressions that are then merged into a dynamically generated Java source file. So far - so good. Everything works. BUT - I wish to be able to embed the line number of the XML node where the Java code was included from (so that if the configuration contains uncompilable code, each method will have a pointer to the source XML document and the line number for ease of debugging). I don't require the line number at parse-time and I don't need to validate the XML Source Document and throw an error at a particular line number. I need to be able to access the line number for each node and attribute in my DOM or per SAX event. Any suggestions on how I might be able to achieve this? P.S. Also, I read the StAX has a method to obtain line number whilst parsing, but ideally I would like to achieve the same result with regular SAX/DOM processing in Java 4/5 rather than become a Java 6+ application or take on extra .jar files.
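
    DOM alone never exposes positions, but a SAX ContentHandler is handed a Locator before parsing starts, and locator.getLineNumber() inside startElement gives exactly this; the usual trick (works on Java 1.4/5, no StAX needed) is to record those numbers while building the DOM. A sketch of just the SAX side; wiring it into a DOM builder is left out:

        import org.xml.sax.Attributes;
        import org.xml.sax.Locator;
        import org.xml.sax.SAXException;
        import org.xml.sax.helpers.DefaultHandler;

        // Records the line each element starts on; a DOM-building variant would
        // copy the same number onto the freshly created Element.
        public class LineNumberHandler extends DefaultHandler {
            private Locator locator;

            public void setDocumentLocator(Locator locator) {
                this.locator = locator;   // the parser calls this before parsing begins
            }

            public void startElement(String uri, String localName, String qName,
                                     Attributes attributes) throws SAXException {
                int line = (locator != null) ? locator.getLineNumber() : -1;
                System.out.println(qName + " starts at line " + line);
            }
        }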

    Read the article

  • Can I use a static cache Helper method in a NET MVC controller?

    - by Euston
    I realise there have been a few posts regarding where to add a cache check/update and the separation of concerns between the controller, the model and the caching code. There are two great examples that I have tried to work with but being new to MVC I wonder which one is the cleanest and suits the MVC methodology the best? I know you need to take into account DI and unit testing. Example 1 (Helper method with delegate) ...in controller var myObject = CacheDataHelper.Get(thisID, () => WebServiceServiceWrapper.GetMyObjectBythisID(thisID)); Example 2 (check for cache in model class) in controller var myObject = WebServiceServiceWrapper.GetMyObjectBythisID(thisID)); then in model class.............. if (!CacheDataHelper.Get(cachekey, out myObject)) { //do some repository processing // Add obect to cache CacheDataHelper.Add(myObject, cachekey); } Both use a static cache helper class but the first example uses a method signature with a delegate method passed in that has the name of the repository method being called. If the data is not in cache the method is called and the cache helper class handles the adding or updating to the current cache. In the second example the cache check is part of the repository method with an extra line to call the cache helper add method to update the current cache. Due to my lack of experience and knowledge I am not sure which one is best suited to MVC. I like the idea of calling the cache helper with the delegate method name in order to remove any cache code in the repository but I am not sure if using the static method in the controller is ideal? The second example deals with the above but now there is no separation between the caching check and the repository lookup. Perhaps that is not a problem as you know it requires caching anyway?
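
    For what it's worth, Example 1's helper is usually just a get-or-add wrapper: the delegate only runs on a miss, so the controller stays one line and the repository/service wrapper never mentions the cache. A sketch against HttpRuntime.Cache with a made-up expiry policy (the key handling in the real CacheDataHelper may differ):

        using System;
        using System.Web;
        using System.Web.Caching;

        public static class CacheDataHelper
        {
            // Get-or-add: loadItem is only invoked when the key is not cached.
            public static T Get<T>(string key, Func<T> loadItem) where T : class
            {
                var cached = HttpRuntime.Cache[key] as T;
                if (cached != null)
                    return cached;

                var item = loadItem();
                if (item != null)
                    HttpRuntime.Cache.Insert(key, item, null,
                        DateTime.Now.AddMinutes(10),     // hypothetical absolute expiry
                        Cache.NoSlidingExpiration);
                return item;
            }
        }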

    Read the article

  • Python's asyncore to periodically send data using a variable timeout. Is there a better way?

    - by Nick Sonneveld
    I wanted to write a server that a client could connect to and receive periodic updates without having to poll. The problem I have experienced with asyncore is that if you do not return true when dispatcher.writable() is called, you have to wait until after the asyncore.loop has timed out (default is 30s). The two ways I have tried to work around this is 1) reduce timeout to a low value or 2) query connections for when they will next update and generate an adequate timeout value. However if you refer to 'Select Law' in 'man 2 select_tut', it states, "You should always try to use select() without a timeout." Is there a better way to do this? Twisted maybe? I wanted to try and avoid extra threads. I'll include the variable timeout example here: #!/usr/bin/python import time import socket import asyncore # in seconds UPDATE_PERIOD = 4.0 class Channel(asyncore.dispatcher): def __init__(self, sock, sck_map): asyncore.dispatcher.__init__(self, sock=sock, map=sck_map) self.last_update = 0.0 # should update immediately self.send_buf = '' self.recv_buf = '' def writable(self): return len(self.send_buf) > 0 def handle_write(self): nbytes = self.send(self.send_buf) self.send_buf = self.send_buf[nbytes:] def handle_read(self): print 'read' print 'recv:', self.recv(4096) def handle_close(self): print 'close' self.close() # added for variable timeout def update(self): if time.time() >= self.next_update(): self.send_buf += 'hello %f\n'%(time.time()) self.last_update = time.time() def next_update(self): return self.last_update + UPDATE_PERIOD class Server(asyncore.dispatcher): def __init__(self, port, sck_map): asyncore.dispatcher.__init__(self, map=sck_map) self.port = port self.sck_map = sck_map self.create_socket(socket.AF_INET, socket.SOCK_STREAM) self.bind( ("", port)) self.listen(16) print "listening on port", self.port def handle_accept(self): (conn, addr) = self.accept() Channel(sock=conn, sck_map=self.sck_map) # added for variable timeout def update(self): pass def next_update(self): return None sck_map = {} server = Server(9090, sck_map) while True: next_update = time.time() + 30.0 for c in sck_map.values(): c.update() # <-- fill write buffers n = c.next_update() #print 'n:',n if n is not None: next_update = min(next_update, n) _timeout = max(0.1, next_update - time.time()) asyncore.loop(timeout=_timeout, count=1, map=sck_map)

    Read the article

  • Multiple websites, Single sign-on design

    - by Yannis
    Hi all, I have a question. A client I have been doing some work recently has a range of websites with different login mechanisms. He is looking to slowly migrate to a single sign-on mechanism for his websites (all written in asp.net mvc). I am looking at my options here, so here is a list of requirements: 1) It has to be secure (duh) 2) It needs to support extra user properties over and above the usual name, address stuff (such as money or credits for a user) 3) It has to provide a centralized user management web console for his convenience (I understand that this will be a small project on top of whatever design solution I choose to go for) 4) It has to integrate with the existing websites without re-engineering the whole product (I understand that this depends on the current product implementation). 5) It has to deal with emailing the user when he registers (in order for him to activate his account) 6) It has to deal with activating the user when he clicks the activate me link in the email (I understand that 5 and 6 require some form of email templating system to support different emails per application) I was thinking of creating a library working together with forms authentication that exposes whatever methods are required (e.g. login, logout, activate, etc. and a small restful service to implement activation from email, registration processing etc. Taking into account that loads of things have been left out to make this question brief and to the point, does this sound like a good design? But this looks like a very common problem so arent there any existing projects that I could use? Thanks for reading.

    Read the article

  • C# Late Binding for Parameterized Property

    - by optim
    I'm trying to use late binding to connect to a COM automation API provided by a program called Amibroker, using a C# WinForms project. So far I've been able to connect to everything in the API except one item, which I believe to be a "parameterized property" based on extensive Googling. Here's what the API specification looks like according to the docs (Full version here: http://www.amibroker.com/guide/objects.html): Property Filter(ByVal nType As Integer, ByVal pszCategory As String) As Long [r/w] A javascript snippet to update the value looks like this: AB = new ActiveXObject("Broker.Application"); AA = AB.Analysis; AA.Filter( 0, "market" ) = 0; Using the following C# late-binding code, I can get the value of the property, but I can't for the life of me figure out how to set the value: object[] parameter = new object[2]; parameter[0] = Number; parameter[1] = Type; object filters = _analysis.GetType().InvokeMember("Filter", BindingFlags.GetProperty, null, _analysis, parameter); So far I have tried: using BindingFlags.SetProperty, BindingFlags.SetField casting the returned object to a PropertyInfo object and trying to update the value using it adding extra object containing the value to the parameters object various other things as last-ditch efforts From what I can see, this should be straight-forward, but I'm finding the late binding in C# to be cumbersome at best. The property looks like a method call to me, which is what is throwing me off. How does one assign a value to a method, and what would the prototype for late-binding C# code look like for it? Hopefully that explains it well enough, but feel free to ask if I've left anything unclear. Thanks in advance for any help! Daniel
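
    For a COM parameterized property the setter is still InvokeMember with BindingFlags.SetProperty; the part that is easy to miss is that the value being assigned travels in the same argument array, appended after the index parameters. A hedged sketch of the equivalent of AA.Filter(0, "market") = 0:

        // args = { index parameters..., value-to-assign }
        object[] args = new object[3];
        args[0] = 0;          // nType
        args[1] = "market";   // pszCategory
        args[2] = 0;          // the value assigned to the property (coerced to the COM Long)

        _analysis.GetType().InvokeMember("Filter",
            System.Reflection.BindingFlags.SetProperty,
            null, _analysis, args);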

    Read the article
