Search Results

Search found 39473 results on 1579 pages for 'johny why'.


  • Django-South introspection rule doesn't work.

    - by Ory Band
    I'm using Django 1.2.3 and South 0.7.3. I am trying to convert my app (named core) to use Django-South. I have a custom model/field that I'm using, named ImageWithThumbsField. It's basically just the ol' django.db.models.ImageField with some attributes such as height, weight, etc. While trying to ./manage.py convert_to_auth core I receive South's freezing errors. I have no idea why; I'm probably missing something... I am using a simple custom model:

        from django.db.models import ImageField

        class ImageWithThumbsField(ImageField):
            def __init__(self, verbose_name=None, name=None, width_field=None,
                         height_field=None, sizes=None, **kwargs):
                self.verbose_name = verbose_name
                self.name = name
                self.width_field = width_field
                self.height_field = height_field
                self.sizes = sizes
                super(ImageField, self).__init__(**kwargs)

    And this is my introspection rule, which I add to the top of my models.py:

        from south.modelsinspector import add_introspection_rules
        from lib.thumbs import ImageWithThumbsField

        add_introspection_rules(
            [
                (
                    (ImageWithThumbsField, ),
                    [],
                    {
                        "verbose_name": ["verbose_name", {"default": None}],
                        "name": ["name", {"default": None}],
                        "width_field": ["width_field", {"default": None}],
                        "height_field": ["height_field", {"default": None}],
                        "sizes": ["sizes", {"default": None}],
                    },
                ),
            ],
            ["^core/.fields/.ImageWithThumbsField",])

    These are the errors I receive:

        ! Cannot freeze field 'core.additionalmaterialphoto.photo'
        ! (this field has class lib.thumbs.ImageWithThumbsField)
        ! Cannot freeze field 'core.material.photo'
        ! (this field has class lib.thumbs.ImageWithThumbsField)
        ! Cannot freeze field 'core.material.formulaimage'
        ! (this field has class lib.thumbs.ImageWithThumbsField)
        ! South cannot introspect some fields; this is probably because they are custom
        ! fields. If they worked in 0.6 or below, this is because we have removed the
        ! models parser (it often broke things).
        ! To fix this, read http://south.aeracode.org/wiki/MyFieldsDontWork

    Does anybody know why? What am I doing wrong?
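    One detail worth checking, offered as an assumption rather than a confirmed fix: South matches those patterns against the field's full dotted class path, and the errors report lib.thumbs.ImageWithThumbsField while the pattern above targets core with unescaped dots. A sketch of the rule under that assumption:

        from south.modelsinspector import add_introspection_rules
        from lib.thumbs import ImageWithThumbsField

        add_introspection_rules(
            [((ImageWithThumbsField,), [], {"sizes": ["sizes", {"default": None}]})],
            # assumption: the pattern must match the class path South reports,
            # with the dots regex-escaped ("lib\.thumbs\..."), not "core/."
            ["^lib\\.thumbs\\.ImageWithThumbsField"],
        )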

    Read the article

  • Ruby: what is the pitfall in this simple code excerpt that tests variable existence

    - by zipizap
    I'm starting with Ruby, and while making some test samples, I've stumbled on an error in the code that I don't understand. The code is meant to test whether a variable finn is defined?(): if it is defined, it increments it; if it isn't, it defines it with value 0 (zero). As the code threw an error, I started to decompose it into small pieces and run them, to better trace where the error was coming from. The code was run in IRB irb 0.9.5(05/04/13), using ruby 1.9.1p378. First I verify that the variable finn is not yet defined, and all is ok:

        ?> finn
        NameError: undefined local variable or method `finn' for main:Object
            from (irb):134
            from /home/paulo/.rvm/rubies/ruby-1.9.1-p378/bin/irb:15:in `<main>'

    Then I verify that the following inline condition executes as expected, and all is ok:

        ?> ((defined?(finn)) ? (finn+1):(0))
        => 0

    And now comes the code that throws the error:

        ?> finn=((defined?(finn)) ? (finn+1):(0))
        NoMethodError: undefined method `+' for nil:NilClass
            from (irb):143
            from /home/paulo/.rvm/rubies/ruby-1.9.1-p378/bin/irb:15:in `<main>'

    I was expecting that the code would not throw any error, and that after executing it the variable finn would be defined with a first value of 0 (zero). But instead, the code throws the error, and finn gets defined but with a value of nil.

        >> finn
        => nil

    Where might the error come from? Why does the inline condition work alone, but not when used for the finn assignment? Any help appreciated :)
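    For comparison only, a loose Python analogy (an analogy of the surprise, not of Ruby's mechanics, where the parser creates the local variable as soon as it sees the assignment): the assignment itself can change how a name resolves before its right-hand side ever runs.

        x = 10

        def bump():
            # The mere presence of "x = ..." makes x local to bump() at compile
            # time, so the read on the right-hand side no longer sees the global
            # x, even though the assignment hasn't executed yet.
            try:
                x = x + 1
            except UnboundLocalError as err:
                print(err)   # local variable 'x' referenced before assignment

        bump()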

    Read the article

  • Any book on designing and implementing a CRPG engine?

    - by Fabzter
    Hi! First, let me tell you, I am not really interested in making my own RPG engine (at least not in the near future, hehe), but I do feel like I want to understand the internals of how an RPG engine works. Why? Well, because I like to read about programming and design, it keeps me motivated and excited, and because I know I will learn a lot, for even though I have been programming for some years now, I never stop considering myself ignorant... there are simply SO many things involving a game engine (especially RPG ones, like branching storylines, and items, and economics!) that I'm eager to know. I've been searching (and thus finding) lots of info online, but it is never focused on what interests me (most of it talks about the mathematics and AI algorithm implementations, which I know quite well), which is the design of the overall structure, patterns, the scripting engine, the decision engine... damn, so many things I can't even imagine, since I've never done any game programming. I hope you now have an idea of how I feel, how I want to learn for the sake of learning, and why I would like you to tell me if you know of any books touching the topics that interest me the most.

    Read the article

  • How to keep my topmost window on top?

    - by Misko Mare
    I will first explain why I need it, because I anticipate that the first response will be "Why do you need it?". I want to detect when the mouse cursor is on an edge of the screen, and I don't want to use hooks. Hence, I created a one-pixel-wide TOPMOST invisible window. I am using C++ on Win XP, so when the window is created (CreateWindowEx(WS_EX_TOPMOST | WS_EX_TRANSPARENT ...)) everything works fine. Unfortunately, if a user moves another topmost window, for example the taskbar, over my window, I don't get mouse movements. I tried to solve this similarly to approaches suggested in: How To Keep an MDI Window Always on Top. I tried to check the Z-order of my topmost window in WM_WINDOWPOSCHANGED, first with:

        case WM_WINDOWPOSCHANGED:
            WINDOWPOS* pWP = (WINDOWPOS*)lParam;

    yet pWP->hwnd points to my window and pWP->hwndInsertAfter is 0, which should mean that my window is at the top of the Z-order, even though it is covered by the taskbar. Then I tried:

        case WM_WINDOWPOSCHANGED:
            HWND topWndHndl = GetNextWindow(myHandle, GW_HWNDPREV);
            GetWindowText(topWndHndl, pszMem, cTxtLen + 1);

    and I'll always get that the "Default IME" window is on top of my window. Even if I try to bring my window to the top with SetWindowPos() or BringWindowToTop(), "Default IME" stays on top. I don't know what "Default IME" is, nor how to detect whether the taskbar is on top of my window. So my question is: how do I detect that my topmost window is not the top topmost window anymore, and how do I keep it on top? P.S. I know that a "brute force" approach of periodically bringing my window to the top works, yet it is ugly and could have some unwanted interference with, for example, a notification window. (Bringing my window to the top will hide the notification window.) Thank you for your time and suggestions!
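    Since the stated goal is only to detect the cursor at a screen edge without hooks, polling the cursor position is one hook-free alternative that sidesteps window stacking entirely. A minimal sketch, written in Python with ctypes purely for illustration (the same two Win32 calls, GetCursorPos and GetSystemMetrics, are available directly in C++):

        import ctypes

        user32 = ctypes.windll.user32   # Windows only

        class POINT(ctypes.Structure):
            _fields_ = [("x", ctypes.c_long), ("y", ctypes.c_long)]

        def cursor_on_edge(margin=1):
            """True when the cursor touches an edge of the primary screen."""
            pt = POINT()
            user32.GetCursorPos(ctypes.byref(pt))
            width = user32.GetSystemMetrics(0)    # SM_CXSCREEN
            height = user32.GetSystemMetrics(1)   # SM_CYSCREEN
            return (pt.x < margin or pt.y < margin
                    or pt.x >= width - margin or pt.y >= height - margin)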

    Read the article

  • Rails 2.3.2 trying to render ERB instead of HAML

    - by c00lryguy
    Rails is suddenly trying to render ERB instead of Haml and I can't figure out why. I've created new Rails projects, reinstalled Haml, and reinstalled Rails. Here are exactly the steps I take when making my application (Rails 2.3.2):

        rails> rails test
        rails> cd test
        rails\test> haml --rails .
        rails\test> ruby script\generate model user email:string password:string
        rails\test> ruby script\generate controller users index
        rails\test> rake db:migrate

    Here's what the UsersController looks like:

        class UsersController < ApplicationController
          def index
            @users = User.all
          end
        end

    My routes:

        ActionController::Routing::Routes.draw do |map|
          map.resources :users
        end

    I now create views\users\index.html.haml:

        %table
          %th(style="text-align: left;")
            %h1 Users
          - for user in @users
            %tr
              %td= user.email
              %td= user.password

    Annnd run the server... I navigate to localhost:3000\users and I get this error message:

        Template is missing
        Missing template users/index.erb in view path app/views

    For some reason Rails is trying to find and render .erb files instead of .haml files. vendor\plugins\haml\init.rb exists, untouched. I've reinstalled Haml (Pretty Penny) multiple times and still get the same results. I've also tried adding config.gem 'haml' to my environment.rb, but this also doesn't work. I can't figure out why suddenly Rails will not render Haml for me.

    Read the article

  • Is encodeURIComponent really useful?

    - by Marco Demaio
    Something I still don't understand when performing an HTTP GET request to the server is what the advantage is of using the JS function encodeURIComponent to encode each component of the HTTP GET. Doing some tests, I saw the server (using PHP) gets the values of the HTTP GET request properly even if I don't use encodeURIComponent! Obviously I still need to encode at the client level the special characters & ? = / :, otherwise an HTTP GET value like "peace&love=virtue" would be considered a new key/value pair of the HTTP GET request instead of one single value. But why does encodeURIComponent also encode many other characters, like 'è' for example, which is translated into %C3%A8 and must be decoded on a PHP server using the utf8_decode function? By using encodeURIComponent, all values of the HTTP GET request are UTF-8 encoded, therefore when getting them in PHP I have to call the utf8_decode function each time on each $_GET value, which is quite annoying. Why can't we just encode only the & ? = / : characters? See also: http://stackoverflow.com/questions/2607946/js-encodeuricomponent-result-different-from-the-one-created-by-form — it shows that encodeURIComponent does not even encode consistently, because a simple browser FORM GET encodes characters like '€' in a different way. So I still wonder what this encodeURIComponent is for?
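    The percent-encoding itself is language-neutral; a quick demonstration with Python's urllib (an analogue of encodeURIComponent, not the same function) reproduces the UTF-8 byte sequence described above:

        from urllib.parse import quote, unquote

        print(quote("è", safe=""))                  # %C3%A8  (the UTF-8 bytes of 'è', percent-encoded)
        print(quote("peace&love=virtue", safe=""))  # peace%26love%3Dvirtue  (delimiters escaped too)
        print(unquote("%C3%A8"))                    # è  (decoding restores the character)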

    Read the article

  • Javascript: prototypal inheritance and the prototype property

    - by JanD
    Hi, I have a simple code fragment in JS working with prototypal inheritance.

        function object(o) {
            function F() {}
            F.prototype = o;
            return new F();
        }

        // the following code block has an alternate version
        var mammal = {
            color: "brown",
            getColor: function() {
                return this.color;
            }
        }

        var myCat = object(mammal);
        myCat.meow = function(){ return "meow"; }

    That worked fine, but adding this:

        mammal.prototype.kindOf = "predator";

    does not ("mammal.prototype is undefined"). Since I guessed that the object may have no prototype, I rewrote it, replacing the var mammal = {... block with:

        function mammal() {
            this.color = "brown";
            this.getColor = function() { return this.color; }
        }

    which gave me a bunch of other errors: "Function.prototype.toString called on incompatible object", and if I try to call myCat.getColor(), "myCat.getColor is not a function". Now I am totally confused. After reading Crockford and Flanagan I did not get the solution for the errors. So it would be great if somebody knows: why is the prototype undefined in the first example (my foremost concern; I thought the prototype was explicitly set in the object() function)? And why do I get these strange errors when trying to use the mammal function as the prototype object in the object() function? Edit by the creator of the question: These two links helped a lot too: Prototypes_in_JavaScript on the spheredev wiki explains the way the prototype property works relatively simply; what it lacks is some try-out code examples. Some good examples are provided by Morris John's article. I personally find the explanations not as easy as in the first link, but still very good. The most difficult part, even after I actually got it, is really not to confuse the .prototype property with the internal [[Prototype]] of an object.
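    A loose analogy of the shared-lookup behavior the edit describes, using Python class attributes (an analogy only; Python classes and JavaScript prototypes differ mechanically):

        class Mammal:                  # plays the role of the shared prototype object
            color = "brown"

        cat = Mammal()                 # roughly: object(mammal)
        Mammal.kind_of = "predator"    # added to the shared object *after* cat exists

        print(cat.kind_of)   # "predator": not found on cat, lookup falls back to Mammal
        print(cat.color)     # "brown"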

    Read the article

  • Getting Argument Names In Ruby Reflection

    - by Joe Soul-bringer
    I would like to do some fairly heavy-duty reflection in the Ruby programming language. I would like to create a function which would return the names of the arguments of the calling functions higher up the call stack (just one step higher would be enough, but why stop there?). I could use Kernel.caller, go to the file, and parse the argument list, but that would be ugly and unreliable. The function that I would like would work in the following way:

        module A
          def method1(tuti, fruity)
            foo
          end
          def method2(bim, bam, boom)
            foo
          end
          def foo
            print caller_args[1].join(",") # the "1" means one step up the call stack
          end
        end

        A.method1 # prints "tuti,fruity"
        A.method2 # prints "bim,bam,boom"

    I would not mind using ParseTree or some similar tool for this task, but looking at ParseTree, it is not obvious how to use it for this purpose. Creating a C extension like this is another possibility, but it would be nice if someone had already done it for me. Edit2: I can see that I'll probably need some kind of C extension. I suppose that means my question is what combination of C extension would work most easily. I don't think caller+ParseTree would be enough by themselves. As far as why I would like to do this goes, rather than saying "automatic debugging", perhaps I should say that I would like to use this functionality to do automatic checking of the calling and return conditions of functions:

        def add x, y
          check_positive
          return x + y
        end

    where check_positive would throw an exception if x and y weren't positive. (Obviously, there would be more to it than that, but hopefully this gives enough motivation.)
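    For contrast, a sketch of the same idea in Python, where the standard inspect module exposes caller frames and their argument names (an analogy, not a Ruby answer):

        import inspect

        def caller_args(steps_up=1):
            """Argument names of a function higher up the call stack."""
            # stack()[0] is caller_args itself, so the frame of interest
            # sits at index steps_up + 1.
            frame = inspect.stack()[steps_up + 1].frame
            return inspect.getargvalues(frame).args

        def method1(tuti, fruity):
            foo()

        def method2(bim, bam, boom):
            foo()

        def foo():
            print(",".join(caller_args(1)))   # one step up the stack from foo

        method1(1, 2)      # prints "tuti,fruity"
        method2(1, 2, 3)   # prints "bim,bam,boom"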

    Read the article

  • NSUndoManager with Core Data - Redo not working

    - by CJ
    I have a Core Data document-based app which supports undo/redo via the built-in NSUndoManager associated with the NSManagedObjectContext. I have a few actions set up which perform numerous tasks within Core Data, wrap all these tasks into an undo group via beginUndoGrouping/endUndoGrouping, and are processed by the NSUndoManager. Undo works fine. I can perform several successive actions, then undo each one of them successively, and my app's state is maintained correctly. However, the "Redo" menu item is never enabled. This means that the NSUndoManager is telling the menu that there are no items to redo. I am wondering why the NSUndoManager seemingly forgets about items once they are undone, and does not allow redos to occur. One thing I should mention is that I'm disabling undo registration after a document is opened/created. When I perform an action, I call enableUndoRegistration, beginUndoGrouping, perform the action, then call processPendingChanges, setActionName:, endUndoGrouping, and finally disableUndoRegistration. This makes sure that only specific actions are undoable, and any other data changes I make outside of these go unnoticed by the NSUndoManager. This may be part of the issue, but if so, I'm wondering why it affects redo. Thanks in advance.

    Read the article

  • How to customize the content of each page using Page Control and UIScrollView?

    - by viper15
    I have a problem customizing each page using page control and UIScrollView. I'm customizing Page Control from Apple. Basically I would like each page to alternate between text and images: page 1 will have all text, page 2 will have just images, page 3 will have all text, and so on. This is the original code:

        // Set the label and background color when the view has finished loading.
        - (void)viewDidLoad {
            pageNumberLabel.text = [NSString stringWithFormat:@"Page %d", pageNumber + 1];
            self.view.backgroundColor = [MyViewController pageControlColorWithIndex:pageNumber];
        }

    As you can see, this code shows only "Page 1", "Page 2", etc. as you scroll right. I tried to put in this new code, but that didn't make any difference. There's no error. I know this is pretty simple code; I don't know why it doesn't work. I declare pageText as a UILabel.

        // Set the label and background color when the view has finished loading.
        - (void)viewDidLoad {
            pageNumberLabel.text = [NSString stringWithFormat:@"Page %d", pageNumber + 1];
            self.view.backgroundColor = [MyViewController pageControlColorWithIndex:pageNumber];
            if (pageNumber == 1) {
                pageText.text = @"Text in page 1";
            }
            if (pageNumber == 2) {
                pageText.text = @"Image in page 2";
            }
            if (pageNumber == 3) {
                pageText.text = @"Text in page 3";
            }
        }

    I don't know why it doesn't work. Also, if you have a better way to do it, let me know. Thanks.

    Read the article

  • Facebook UIAlertView problem

    - by william-hu
    My iPhone application connects to Facebook. After I log in, a button named "Add feed to your wall" appears. If I click it, a UIAlertView pops up asking "Yes" or "No". If "Yes", it should show the FBStreamDialog. But the FBStreamDialog just flashes, then disappears, and I don't know why. This is my code. First, clicking the button "Add feed to your wall" calls the changeFeed: function:

        -(IBAction) changeFeed {
            UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Sporting Summer Festival Monte-Carlo"
                                                            message:@"Are you attending this concert?"
                                                           delegate:self
                                                  cancelButtonTitle:@"No"
                                                  otherButtonTitles:@"Yes", nil];
            [alert show];
            alert.tag = 1;
            self.alertView = alert;
            [alert release];
        }

    Then, if you choose the "Yes" button, this function is called:

        - (void)alertView:(UIAlertView *)alertView clickedButtonAtIndex:(NSInteger)buttonIndex {
            if (buttonIndex == 0) {
                NSLog(@"NO");
            } else {
                NSLog(@"YES");
                [self showAddFeed];
            }
        }

    And this is the showAddFeed function, which is defined before "clickedButtonAtIndex":

        -(void)showAddFeed {
            FBStreamDialog *dialog = [[[FBStreamDialog alloc] init] autorelease];
            dialog.delegate = self;
            dialog.userMessagePrompt = @"";
            [dialog show];
        }

    It just doesn't work properly, and I don't know why. Thank you for your help.

    Read the article

  • Query gives an unsorted result set when run from stored procedure using CTE

    - by irtizaur
    I am trying to create a paging query using a CTE. It works fine when I execute it from the Microsoft SQL Server Management Studio query editor, and the result set is perfectly sorted as I want. But when I modify it for a stored procedure, it gives me an unsorted result, and I don't have any clue why. Here is my query:

        with items as (
            select ROW_NUMBER() over (order by create_time desc) number
                 , i.item_name item_name
                 , i.create_time create_time
                 , c.category_name category_name
                 , i.category_id category_id
              from cb_item i, cb_category c
             where i.category_id = c.category_id
               and c.category_id = '4E5248FE-05DD-4D01-ABBB-80C6E3BA5CDA'
        )
        select item_name, create_time, category_name, category_id
          from items
         where number between 1 and 25

    And this is the stored procedure version:

        create procedure ItemPage
            @category_id uniqueidentifier
          , @from int
          , @to int
          , @sortby nvarchar(50)
        as
        begin
            with items as (
                select ROW_NUMBER() over (order by @sortby) number
                     , i.item_name item_name
                     , i.create_time create_time
                     , c.category_name category_name
                     , i.category_id category_id
                  from cb_item i, cb_category c
                 where i.category_id = c.category_id
                   and c.category_id = @category_id
            )
            select item_name, create_time, category_name, category_id
              from items
             where number between @from and @to
        end

        exec ItemPage '4E5248FE-05DD-4D01-ABBB-80C6E3BA5CDA', 1, 25, 'create_time desc'

    The first one gives me a sorted result, but the procedure gives me an unsorted result. I don't know why.
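    A likely explanation, stated here as an assumption: in T-SQL, order by @sortby sorts by the variable's value, which is the same scalar for every row, not by the column the string names, so the ROW_NUMBER numbering becomes arbitrary. One common workaround is to validate the sort key against a whitelist and build the SQL text around it; a sketch in Python with hypothetical names:

        # hypothetical names throughout; the point is the whitelist-then-inline shape
        ALLOWED_SORTS = {"create_time desc", "create_time asc", "item_name asc"}

        def item_page_sql(sortby):
            if sortby not in ALLOWED_SORTS:
                raise ValueError("unexpected sort key: %r" % sortby)
            # safe to interpolate: sortby is one of the fixed literals above
            return (
                "with items as (select ROW_NUMBER() over (order by " + sortby + ") number, "
                "i.item_name, i.create_time, c.category_name, i.category_id "
                "from cb_item i join cb_category c on i.category_id = c.category_id "
                "where c.category_id = ?) "
                "select * from items where number between ? and ?"
            )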

    Read the article

  • SQL GUID vs. Integer

    - by Dal
    Hi. I have recently started a new job and noticed that all the SQL tables use the GUID data type for the primary key. In my previous job we used integers (auto-increment) for the primary key, and in my opinion that was a lot easier to work with. For example, say you had two related tables, Product and ProductType: I could easily cross-check the 'ProductTypeID' column of both tables for a particular row to quickly map the data in my head, because it's easy to hold a number (2, 4, 45, etc.) in memory, as opposed to (E75B92A3-3299-4407-A913-C5CA196B3CAB). The extra frustration comes from me wanting to understand how the tables are related, and sadly there is no database diagram :( A lot of people say that GUIDs are better because you can define the unique identifier in your C# code, for example using NewID(), without requiring SQL Server to do it; this also allows you to know provisionally what the ID will be... but I've seen that it is possible to retrieve the 'next auto-incremented integer' too. A DBA contractor reported that our queries could be up to 30% faster if we used the integer type instead of GUIDs... Why does the GUID data type exist, and what advantages does it really provide? Even if it's a choice made by some professional, there must be some good reasons why it's implemented.
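    The client-side-generation advantage mentioned above is easy to picture; a tiny sketch with Python's uuid module standing in for .NET's Guid.NewGuid() or SQL's NEWID():

        import uuid

        # the identifier exists before any database round-trip, so related
        # rows can be wired together in application code and inserted later
        order_id = uuid.uuid4()
        line_item = {"order_id": order_id, "product": "widget"}
        print(order_id)   # e.g. e75b92a3-3299-4407-a913-c5ca196b3cab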

    Read the article

  • environment change in rake task

    - by Mellon
    I am developing a Rails v2.3 app with a MySQL database and the mysql2 gem. I faced a weird situation when changing the environment in a rake task. (All my settings and configurations for environment and database are correct; no problem there.) Here is my simple story: I have a rake task like the following:

        namespace :db do
          task :do_something => :environment do
            # 1. run under 'development' environment
            my_helper.run_under_development_env

            # 2. change to 'custom' environment
            RAILS_ENV='custom'
            Rake::Task['db:create']
            Rake::Task['db:migrate']

            # 3. change back to 'development' environment
            RAILS_ENV='development'

            # 4. But it still runs in the 'custom' environment. Why?
            my_helper.run_under_development_env
          end
        end

    The rake task is quite simple. What it does is: 1. First, run a method from my_helper under the "development" environment. 2. Then, change to the "custom" environment and run db:create and db:migrate. Until now, everything is fine; the environment did change to "custom". 3. Then, change back again to the "development" environment. 4. Run the helper method again under the "development" environment. But though I changed the environment back to "development" in step 3, the last method still runs in the "custom" environment. Why? And how do I get rid of this? --- P.S. --- I have also checked a post with a similar situation here, and tried to use the solution there (in step 2):

        ActiveRecord::Base.establish_connection('custom')
        Rake::Task['db:create']
        Rake::Task['db:migrate']

    to change the database connection instead of changing the environment, but db:create and db:migrate will still run against the "development" database, though the linked post said it should run for the "custom" database... weird.

    Read the article

  • Python import error: Symbol not found, but the symbol is present in the file

    - by Autopulated
    I get this error when I try to import ssrc.spread:

        ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/ssrc/_spread.so, 2): Symbol not found: __ZN17ssrcspread_v1_0_67Mailbox11ZeroTimeoutE

    The file in question (_spread.so) includes the symbol:

        $ nm _spread.so | grep _ZN17ssrcspread_v1_0_67Mailbox11ZeroTimeoutE
        U __ZN17ssrcspread_v1_0_67Mailbox11ZeroTimeoutE
        U __ZN17ssrcspread_v1_0_67Mailbox11ZeroTimeoutE

    (twice because the file is a fat ppc/x86 binary). The archive header information of _spread.so is:

        $ otool -fahv _spread.so
        Fat headers
        fat_magic FAT_MAGIC
        nfat_arch 2
        architecture ppc7400
            cputype CPU_TYPE_POWERPC  cpusubtype CPU_SUBTYPE_POWERPC_7400
            capabilities 0x0  offset 4096  size 235272  align 2^12 (4096)
        architecture i386
            cputype CPU_TYPE_I386  cpusubtype CPU_SUBTYPE_I386_ALL
            capabilities 0x0  offset 241664  size 229360  align 2^12 (4096)
        /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/ssrc/_spread.so (architecture ppc7400):
        Mach header
        magic     cputype  cpusubtype  caps  filetype  ncmds  sizeofcmds  flags
        MH_MAGIC  PPC      ppc7400     0x00  BUNDLE    10     1420        NOUNDEFS DYLDLINK BINDATLOAD TWOLEVEL WEAK_DEFINES BINDS_TO_WEAK
        /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/ssrc/_spread.so (architecture i386):
        Mach header
        magic     cputype  cpusubtype  caps  filetype  ncmds  sizeofcmds  flags
        MH_MAGIC  I386     ALL         0x00  BUNDLE    11     1604        NOUNDEFS DYLDLINK BINDATLOAD TWOLEVEL WEAK_DEFINES BINDS_TO_WEAK

    And my python is Python 2.6.4:

        $ which python | xargs otool -fahv
        Fat headers
        fat_magic FAT_MAGIC
        nfat_arch 2
        architecture ppc
            cputype CPU_TYPE_POWERPC  cpusubtype CPU_SUBTYPE_POWERPC_ALL
            capabilities 0x0  offset 4096  size 9648  align 2^12 (4096)
        architecture i386
            cputype CPU_TYPE_I386  cpusubtype CPU_SUBTYPE_I386_ALL
            capabilities 0x0  offset 16384  size 13176  align 2^12 (4096)
        /Library/Frameworks/Python.framework/Versions/2.6/bin/python (architecture ppc):
        Mach header
        magic     cputype  cpusubtype  caps  filetype  ncmds  sizeofcmds  flags
        MH_MAGIC  PPC      ALL         0x00  EXECUTE   11     1268        NOUNDEFS DYLDLINK TWOLEVEL
        /Library/Frameworks/Python.framework/Versions/2.6/bin/python (architecture i386):
        Mach header
        magic     cputype  cpusubtype  caps  filetype  ncmds  sizeofcmds  flags
        MH_MAGIC  I386     ALL         0x00  EXECUTE   11     1044        NOUNDEFS DYLDLINK TWOLEVEL

    There seems to be a difference in the ppc architecture between the files, but I'm running on an Intel, so I don't see why this should cause a problem. So why might the symbol not be found?
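    One reading of that nm output, offered as a hint rather than a diagnosis: the leading U marks a symbol as *undefined*, meaning _spread.so references it but expects some other library to provide it, which would be consistent with dlopen failing to resolve it. A small helper for checking this kind of thing (assuming nm is on PATH):

        import subprocess

        def symbol_lines(library, needle):
            """Print nm lines mentioning `needle`; a leading 'U' type means the
            symbol is undefined (needed by, not provided by, the library)."""
            out = subprocess.run(["nm", library],
                                 capture_output=True, text=True).stdout
            for line in out.splitlines():
                if needle in line:
                    print(line)

        symbol_lines("_spread.so", "Mailbox11ZeroTimeout")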

    Read the article

  • Advantages/disadvantages of Python and Ruby

    - by Seburdis
    I know this is going to seem a little like all the other Python vs. Ruby questions out there, but I'm not looking specifically to pick one over the other all the time. My question is, essentially: why would you use one language over the other when starting a new project? What features does Ruby have that Python doesn't that would make you decide on it for a given project? What about Python over Ruby? I was just recently thinking about the differentiation between the two languages because of Jamis Buck's "There is no magic, only awesome" series of articles (4 parts, available here) when I realized I really don't know enough about the two languages to know when to choose one over the other. I'm hoping to get objective answers from people who have experience with both languages, rather than just "Python is better, Ruby sucks" kinds of responses. If you know of a feature in one language that doesn't exist in the other and is great in a certain situation, feel free to chime in and say why you think it's awesome. If you have another language comparable to these that you'd like to suggest pros/cons for, like Groovy for example, that would be appreciated too. Some things I know each language has going for it:

    Ruby:
    - Awesome metaprogramming
    - Great community
    - Wide selection of gems
    - Rails
    - Great code readability, usually
    - MacRuby is great for native development on Mac without ObjC
    - Amazing testing tools (cucumber, rspec, shoulda, autotest, etc.)

    Python:
    - Whitespace indentation
    - List comprehensions
    - Better functional programming support?
    - Lots of support on Linux
    - easy_install isn't far from gems
    - Great variety of libraries available

    Read the article

  • Website --> Add Reference also creates files with pdb extensions

    - by SourceC
    Hello. Q1: Any assemblies stored in the Bin directory will automatically be referenced by the web application. We can add an assembly reference via Website --> Add Reference or simply by copying the dll into the Bin folder. But I noticed that when we add a reference via Website --> Add Reference, additional files with a .pdb extension are placed inside Bin. If these files are also needed, then why does the reference still work even if we only place the referenced dll into Bin, but not the pdb files? Q2: It appears that if you add a new item to a web project, this class will automatically be added to the project list and we can reference it from all pages in this project. So are all files added to the project list automatically being referenced? thanx EDIT: On your second question, you are adding a public class to a namespace, so it will be visible to other classes in that project and in that namespace. I don't know much about assemblies, but I'd assume the reason an item (class) added to the project is visible to other classes in that project is the simple fact that in a web project all classes get compiled into a single assembly, and public classes contained in the same assembly are always visible to each other?

    Read the article

  • Merging k sorted linked lists - analysis

    - by Kotti
    Hi! I am thinking about different solutions for one problem. Assume we have K sorted linked lists and we are merging them into one. All these lists together have N elements. The well-known solution is to use a priority queue and pop/push first elements from every list, and I can understand why it takes O(N log K) time. But let's take a look at another approach. Suppose we have some MERGE_LISTS(LIST1, LIST2) procedure that merges two sorted lists, and it takes O(T1 + T2) time, where T1 and T2 stand for the LIST1 and LIST2 sizes. What we do now generally means pairing these lists and merging them pair-by-pair (if the number is odd, the last list, for example, could be ignored at the first step). This generally means we have to make the following "tree" of merge operations (N1, N2, N3... stand for the sizes of LIST1, LIST2, LIST3...):

        O(N1 + N2) + O(N3 + N4) + O(N5 + N6) + ...
        O(N1 + N2 + N3 + N4) + O(N5 + N6 + N7 + N8) + ...
        O(N1 + N2 + N3 + N4 + ... + NK)

    It looks obvious that there will be log(K) of these rows, each of them implementing O(N) operations, so the time for the MERGE(LIST1, LIST2, ..., LISTK) operation would actually equal O(N log K). My friend told me (two days ago) it would take O(K N) time. So, the question is: did I f%ck up somewhere, or is he actually wrong about this? And if I am right, why can't this 'divide & conquer' approach be used instead of the priority queue approach?
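    The analysis above checks out: each row of the tree halves the number of lists at a total cost of O(N), and there are about log2(K) rows. A runnable sketch of the pairwise scheme in Python:

        def merge2(a, b):
            """Merge two sorted lists in O(len(a) + len(b))."""
            out, i, j = [], 0, 0
            while i < len(a) and j < len(b):
                if a[i] <= b[j]:
                    out.append(a[i]); i += 1
                else:
                    out.append(b[j]); j += 1
            out.extend(a[i:]); out.extend(b[j:])
            return out

        def merge_pairwise(lists):
            """Each round halves the list count at O(N) cost: O(N log K) total."""
            while len(lists) > 1:
                nxt = [merge2(lists[i], lists[i + 1])
                       for i in range(0, len(lists) - 1, 2)]
                if len(lists) % 2:        # odd count: carry the last list forward
                    nxt.append(lists[-1])
                lists = nxt
            return lists[0] if lists else []

        print(merge_pairwise([[1, 4], [2, 3], [0, 5], [6]]))  # [0, 1, 2, 3, 4, 5, 6]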

    Read the article

  • Apache mod_rewrite RewriteCond to bypass static resources not working

    - by d11wtq
    I can't for the life of me fathom out why this RewriteCond is causing every request to be sent to my FastCGI application when it should in fact be letting Apache serve up the static resources. I've added a hello.txt file to my DocumentRoot to demonstrate. The text file:

        $ ls /Sites/CioccolataTest.webapp/Contents/Resources/static
        hello.txt

    The VirtualHost and its rewrite rules:

        AppClass /Sites/CioccolataTest.webapp/Contents/MacOS/CioccolataTest -port 5065
        FastCgiExternalServer /Sites/CioccolataTest.webapp/Contents/MacOS/CioccolataTest.fcgi -host 127.0.0.1:5065

        <VirtualHost *:80>
            ServerName cioccolata-test.webdev
            DocumentRoot /Sites/CioccolataTest.webapp/Contents/Resources/static
            RewriteEngine On
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteRule ^/(.*)$ /Sites/CioccolataTest.webapp/Contents/MacOS/CioccolataTest.fcgi/$1 [QSA,L]
        </VirtualHost>

    Even with the -f, Apache is directing requests for this text file to my app (i.e. accessing http://cioccolata-test.webdev/hello.txt returns my app, not the text file). As a proof of concept I changed the RewriteCond to:

        RewriteCond %{REQUEST_URI} !^/hello.txt

    That made it serve the text file correctly and allowed every other request to hit the FastCGI application. Why doesn't my original version work? I need to tell Apache to serve every file in the DocumentRoot as a static resource, but if the file doesn't exist it should rewrite the request to my FastCGI application. NOTE: The running FastCGI application is at /Sites/CioccolataTest.webapp/Contents/MacOS/CioccolataTest (without the .fcgi suffix)... the .fcgi suffix is only being used to tell the fastcgi module to direct the request to the app.

    Read the article

  • TCP: Address already in use exception - possible causes for client port? NO PORT EXHAUSTION

    - by TomTom
    Hello, stupid problem. I get these from a client connecting to a server. Sadly, the setup is complicated, making debugging complex, and we have run out of options. The environment: a client/server system, both running on the same machine. The client is actually a service doing some database manipulation at specific times. The connection comes from C#, going through OleDb to an EasySoft JDBC driver, to a custom-written JDBC server that then hosts logic in C++. Yeah, complex, but the third-party supplier decided to expose the extension mechanisms for their server through a JDBC interface. Not a lot can be done here ;) The symptom: at (ir)regular intervals we get an "Address already in use: connect" from the JDBC driver. They seem to come from one particular service we run. Now, I did read all the stuff about port exhaustion. This is why we have a little tool running now that counts ports and their states every minute. Last time this happened, we had an astonishing 370 ports in use, with the count rising to about 900 AFTER the error. We already patched the registry (it is a Windows machine) to allow more than the standard 5000 client ports, but even then, we are far, far from that limit to start with. Which is why I am asking here. Anyone got an idea what ELSE could cause this? It is a Windows 2003 Server machine, 64 bit. The only other thing I can see that may cause it (but this functionality is supposedly disabled) is Symantec Endpoint Protection, which is installed on the server; being capable of acting as a firewall, it could possibly intercept network traffic. I don't want to open a can of worms by pointing to Symantec prematurely (if pointing to Symantec can ever be seen as such). So, anyone got an idea what else may be the cause? Thanks
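    The port-counting tool described above is straightforward to reproduce; a sketch using psutil (an assumption on my part: psutil is a third-party library and not part of the original setup) that tallies TCP connection states, e.g. to watch TIME_WAIT growth around the failures:

        import collections
        import psutil   # assumption: third-party psutil is installed

        def tcp_state_counts():
            """Tally TCP connection states on this machine."""
            states = collections.Counter(
                conn.status for conn in psutil.net_connections(kind="tcp"))
            return dict(states)   # e.g. {'TIME_WAIT': 370, 'ESTABLISHED': 12}

        print(tcp_state_counts())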

    Read the article

  • How to define an index over several columns in a Hibernate entity?

    - by foobar
    Morning. I need to add indexing in a Hibernate entity. As I know, it is possible to use the @Index annotation to specify an index for a separate column, but I need an index over several fields of the entity. I've googled and found the JBoss annotation @Table, which allows this (by specification). But (I don't know why) this functionality doesn't work. Maybe the JBoss version is lower than necessary, or maybe I don't understand how to use this annotation, but... the composite index is not created. Why might the index not be created? The jboss version is 4.2.3.GA. Entity example:

        package somepackage;

        import org.hibernate.annotations.Index;
        import javax.persistence.Column;
        import javax.persistence.Entity;
        import javax.persistence.GeneratedValue;
        import javax.persistence.Id;

        @Entity
        @org.hibernate.annotations.Table(appliesTo = House.TABLE_NAME,
            indexes = {
                @Index(name = "IDX_XDN_DFN", columnNames = {House.XDN, House.DFN})
            }
        )
        public class House {
            public final static String TABLE_NAME = "house";
            public final static String XDN = "xdn";
            public final static String DFN = "dfn";

            @Id
            @GeneratedValue
            private long Id;

            @Column(name = XDN)
            private long xdn;

            @Column(name = DFN)
            private long dfn;

            @Column
            private String address;

            public long getId() { return Id; }
            public void setId(long id) { this.Id = id; }
            public long getXdn() { return xdn; }
            public void setXdn(long xdn) { this.xdn = xdn; }
            public long getDfn() { return dfn; }
            public void setDfn(long dfn) { this.dfn = dfn; }
            public String getAddress() { return address; }
            public void setAddress(String address) { this.address = address; }
        }

    When jboss/hibernate tries to create the table "house", it throws the following exception:

        Reason: org.hibernate.AnnotationException: @org.hibernate.annotations.Table references an unknown table: house

    Read the article

  • Loaded resources look ugly

    - by Xaver
    I use the TreeView class in my project, with icons. First I loaded the icons like this:

        ImageList^ il = gcnew ImageList();
        il->Images->Add(Image::FromFile("DISK.ico"));
        il->Images->Add(Image::FromFile("FILE.ico"));
        il->Images->Add(Image::FromFile("FOLDER.ico"));
        treeView1->ImageList = il;

    All was good, but I don't like that if I delete the icons from the project directory, there is an error in my project. So I decided to add the icons to the .resx file. Now the icon loading looks like this:

        ImageList^ il = gcnew ImageList();
        Resources::ResourceManager^ resourceManager =
            gcnew Resources::ResourceManager("FilesSaver.Form1", GetType()->Assembly);
        Object^ disk = resourceManager->GetObject("DISK");
        il->Images->Add(reinterpret_cast<Image^>(disk));
        Object^ file = resourceManager->GetObject("FILE");
        il->Images->Add(reinterpret_cast<Image^>(file));
        Object^ folder = resourceManager->GetObject("FOLDER");
        il->Images->Add(reinterpret_cast<Image^>(folder));
        treeView1->ImageList = il;

    But why do the icons in the TreeView now look ugly (they look lighter and have a big black border)? Why did this happen?

    Read the article

  • DIVs over flash movies in Internet Explorer

    - by drew
    The age-old question... why the hell doesn't a div positioned over a flash object stay on top with z-index? I have found the answer in the past, but it's been so long, I can't seem to get it. My flash movie is in a div floating left:

        <div id="flash">
          <object width="614" height="289">
            <param name="movie" value="images/75.swf">
            <param name="wmode" value="transparent">
            <embed src="images/75.swf" width="614" height="289" wmode="transparent">
            </embed>
          </object>
        </div>

    My css for the div that needs to be on top is:

        .menu ul li:hover ul li a:hover {
          background: #5a3f2d;
          color: #FFF;
          z-index: 9999;
        }

    I cannot get it to show above the flash movie in IE6 or IE8. I know this is old school, but I'm frustrated! Does my nav div need to have an absolute position? Is that why it doesn't work? Example is here. Hover over the first link on the right: "CUSTOMER SERVICE". Thanks all :)

    Read the article

  • Clarification needed: How does .NET runtime resolve assembly references from parent folder?

    - by aoven
    I have the following output structure of executables in my solution:

        %ProgramFiles%
        |
        +-[MyAppName]
          |
          +-[Client]
          | |
          | +-(EXE & several DLL assemblies)
          |
          +-[Common]
          | |
          | +-[Schema Assemblies]
          | | |
          | | +-(several DLL assemblies)
          | |
          | +-(several DLL assemblies)
          |
          +-[Server]
            |
            +-(EXE & several DLL assemblies)

    Each project in the solution references different DLL assemblies, some of which are outputs of other projects in the solution, and others are plain 3rd-party assemblies. For example, the [Client] EXE might reference an assembly in [Common], which is in a different directory branch. All references have "Copy Local" set to false, to mirror the layout of the files in the final installed application. Now, if I take a look at reference properties in the Visual Studio IDE, I see that the "Path" of every reference is absolute and that it corresponds to the actual output location of the assembly. That's understandable and correct. As expected, the solution compiles and runs just fine. What I don't understand is why everything seems to work even when I close the IDE, rename the [MyAppName] directory, and run the [Client] EXE manually. How does the runtime find the assemblies if the reference paths aren't the same as they were at the time of linking? To be clear, this is actually exactly what I'm after: a semi-dispersed set of application files that run fine regardless of where the [MyAppName] directory is located or even what it's named. I'd just like to know how and why this works without any specific path resolution on my part. I've read the answers to this similar question, but I still don't get it. Help much appreciated!

    Read the article
