Search Results

Search found 17782 results on 712 pages for 'questions and answers'.


  • WordPress: How to override all default theme CSS so your custom one is loaded last?

    - by mickael
    I have a problem where I've been able to include a custom CSS file in the <head> section of my WordPress theme with the following code: function load_my_style_wp_enqueue_scripts() { wp_register_style('my_styles_css', includes_url("/css/my_styles.css")); wp_enqueue_style('my_styles_css'); } add_action('wp_enqueue_scripts','load_my_style_wp_enqueue_scripts'); But the order in the source code is as follows: <link rel='stylesheet' id='my_styles_css-css' href='http://...folderA.../my_styles.css?ver=3.1' type='text/css' media='all' /> <link rel="stylesheet" id="default-css" href="http://...folderB.../default.css" type="text/css" media="screen,projection" /> <link rel="stylesheet" id="user-css" href="http://...folderC.../user.css" type="text/css" media="screen,projection" /> I want my_styles_css to be the last file to load, overriding the default and user files. I've tried to achieve this by modifying wp_enqueue_style in different ways, but without any success. I've tried: wp_enqueue_style('my_styles_css', array('default','user')); or wp_enqueue_style('my_styles_css', false, array('default','user'), '1.0', 'all'); I've seen some related questions that either went unanswered or suggested these last two methods, which still fail for me. The function above is part of a plugin that I've got enabled in my WordPress installation.
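    One approach worth trying (a sketch only, not verified against this theme; the handle names 'default' and 'user' are assumed from the id attributes of the generated <link> tags): declare the other stylesheets as dependencies of my_styles_css and hook the callback at a late priority, since WordPress prints a style after the styles it depends on.

    // Hypothetical sketch: enqueue my_styles.css after the 'default' and 'user' styles.
    function load_my_style_wp_enqueue_scripts() {
        wp_enqueue_style(
            'my_styles_css',
            includes_url('/css/my_styles.css'),
            array('default', 'user'),  // dependency handles, assumed from the <link> ids above
            '1.0',
            'all'
        );
    }
    // The third argument (99) is the hook priority; a large value runs this after the theme's own callbacks.
    add_action('wp_enqueue_scripts', 'load_my_style_wp_enqueue_scripts', 99);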

    Read the article

  • Doctrine Default Primary Key Problem (Again)

    - by 01010011
    Hi, Should I change all of my uniquely-named MySQL database primary keys to 'id' to avoid getting errors related to Doctrine's default primary key set in the plugin 'doctrine_pi.php'? To further elaborate, I am getting the following recurring error, this time after trying to log in to my login page: SQLSTATE[42S22]: Column not found: 1054 Unknown column 'u.book_id' in 'field list'' in... I suspect the problem lies in the MySQL table used for my login, which has a primary key called id. Marc B originally solved an identical problem for me in this post http://stackoverflow.com/questions/2702229/doctrine-codeigniter-mysql-crud-errors when I had the same problem with a different table within the same database. Following his suggestion, I changed the default primary key located at system/application/plugins/doctrine_pi.php from 'id' to 'book_id': <?php // system/application/plugins/doctrine_pi.php ... // set the default primary key to be named 'id', integer, 4 bytes Doctrine_Manager::getInstance()->setAttribute( Doctrine::ATTR_DEFAULT_IDENTIFIER_OPTIONS, array('name' => 'book_id', 'type' => 'integer', 'length' => 4)); and that solved my previous problem. However, my login page stopped working. So what is the safe thing to do? Change all of my primary keys to 'id' (will that solve the problem without causing some other problem I am not aware of?), or should I add some lines of code in doctrine_pi.php?
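    One possibility (a hedged sketch, assuming Doctrine 1.x models for this schema): leave the global default identifier as 'id' and declare the non-standard key only on the model that actually uses it, so the login table's model keeps its plain id column. The class and column names below are illustrative.

    // Hypothetical sketch: per-model primary key instead of changing the global default.
    class Book extends Doctrine_Record
    {
        public function setTableDefinition()
        {
            // 'book_id' is assumed to be the real column name in the books table.
            $this->hasColumn('book_id', 'integer', 4, array(
                'primary'       => true,
                'autoincrement' => true,
            ));
            $this->hasColumn('title', 'string', 255);
        }
    }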

    Read the article

  • Languages and VMs: Features that are hard to optimize and why

    - by mrjoltcola
    I'm doing a survey of features in preparation for a research project. Name a mainstream language or language feature that is hard to optimize, and why the feature is or isn't worth the price paid, or instead, just debunk my theories below with anecdotal evidence. Before anyone flags this as subjective, I am asking for specific examples of languages or features, and ideas for optimization of these features, or important features that I haven't considered. Also, any references to implementations that prove my theories right or wrong. Top of my list of hard-to-optimize features and my theories (some of my theories are untested and are based on thought experiments):
    1) Runtime method overloading (aka multi-method dispatch or signature-based dispatch). Is it hard to optimize only when combined with features that allow runtime recompilation or method addition, or is it just hard anyway? Call site caching is a common optimization for many runtime systems, but multi-methods add additional complexity as well as making it less practical to inline methods.
    2) Type morphing / variants (aka value-based typing as opposed to variable-based). Traditional optimizations simply cannot be applied when you don't know if the type of something can change in a basic block. Combined with multi-methods, inlining must be done carefully if at all, and probably only for a given threshold of size of the callee, i.e. it is easy to consider inlining simple property fetches (getters / setters) but inlining complex methods may result in code bloat. The other issue is I cannot just assign a variant to a register and JIT it to the native instructions because I have to carry around the type info, or every variable needs 2 registers instead of 1. On IA-32 this is inconvenient, even if improved with x64's extra registers. This is probably my favorite feature of dynamic languages, as it simplifies so many things from the programmer's perspective.
    3) First-class continuations. There are multiple ways to implement them, and I have done so in both of the most common approaches, one being stack copying and the other implementing the runtime to use continuation-passing style, cactus stacks, copy-on-write stack frames, and garbage collection. First-class continuations have resource management issues, i.e. we must save everything in case the continuation is resumed, and I'm not aware of any languages that support leaving a continuation with "intent" (i.e. "I am not coming back here, so you may discard this copy of the world"). Having programmed in both the threading model and the continuation model, I know both can accomplish the same thing, but continuations' elegance imposes considerable complexity on the runtime and also may affect cache efficiency (the locality of the stack changes more with the use of continuations and co-routines). The other issue is they just don't map to hardware. Optimizing continuations is optimizing for the less-common case, and as we know, the common case should be fast, and the less-common cases should be correct.
    4) Pointer arithmetic and the ability to mask pointers (storing them in integers, etc.). I had to throw this in, but I could actually live without it quite easily.
    My feeling is that many of the high-level features, particularly in dynamic languages, just don't map to hardware. Microprocessor implementations have billions of dollars of research behind the optimizations on the chip, yet the choice of language feature(s) may marginalize many of these features (features like caching, aliasing the top of stack to a register, instruction parallelism, return address buffers, loop buffers and branch prediction). Macro-applications of micro-features don't necessarily pan out the way some developers like to think, and implementing many languages in a VM ends up mapping native ops into function calls (i.e. the more dynamic a language is, the more we must look up/cache at runtime; nothing can be assumed, so our instruction mix is made up of a higher percentage of non-local branching than traditional, statically compiled code), and the only thing we can really JIT well is expression evaluation of non-dynamic types and operations on constant or immediate types. It is my gut feeling that bytecode virtual machines and JIT cores are perhaps not always justified for certain languages because of this. I welcome your answers.

    Read the article

  • Showing / Hiding Divs

    - by Rob
    After cobbling together a few questions I've managed to get this far to showing / hiding divs: $(document).ready(function(){ $('.box').hide(); $('#categories').onMouseOver(function() { $('.box').hide(); $('#div' + $(this).val()).show(); }); }); HTML: <div id="categories"> <div id="btn-top20"><a href="">Top 20 Villas</a></div> <div id="btn-villaspec"><a href="">Villa Specials</a></div> <div id="btn-staffpicks"><a href="">Our Staff Picks</a></div> </div> <div id="category-content"> <div id="divarea1" class="box"> Content 1 </div> <div id="divarea2" class="box"> Content 2 </div> <div id="divarea3" class="box"> Content 3 </div> </div> What am I missing?
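    A hedged guess at what's missing (a sketch only): jQuery has no onMouseOver() method, so that handler never binds; use mouseover() or hover() on the individual buttons, and derive the target id from the button itself rather than from val(), which only applies to form fields. The data-target attribute below is an assumed addition to the markup, e.g. <div id="btn-top20" data-target="divarea1">.

    $(document).ready(function () {
        $('.box').hide();
        // Bind to each button div inside #categories rather than to the container itself.
        $('#categories > div').mouseover(function () {
            $('.box').hide();
            // Show the content div named in the button's (assumed) data-target attribute.
            $('#' + $(this).attr('data-target')).show();
        });
    });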

    Read the article

  • Retrieve OpenId FullName from Google

    - by user294711
    I'm using the DotNetOpenAuth lib to work with Google (only) OpenId. I'm retrieving Email without any problem, but I can't get FullName; it is always null. request.AddExtension(new ClaimsRequest { Email = DemandLevel.Require, FullName = DemandLevel.Require }); ClaimsResponse claimsResponse = relyingPartyResponse.GetExtension<ClaimsResponse>(); if (claimsResponse != null) { var email = claimsResponse.Email; var fullName = claimsResponse.FullName; } I googled this problem and found this: "Glad you got it working. Google will not give a full name or nickname for their users. They ONLY give email address, and (I think, but perhaps only on a white list) the timezone. It's not a matter of figuring out how to rig your RP so that it works. Google just won't do it yet." – Andrew Arnott Sep 8 at 14:22 stackoverflow.com/questions/1387438/retrieve-openid-user-information-claims-across-providers But that was in Sep 2009, and maybe something has changed since then... I've found this in http://code.google.com/apis/accounts/docs/OpenID.html: openid.ax.required -- (required) Specifies the attribute being requested. Valid values include: "country", "email", "firstname", "language", "lastname". To request multiple attributes, set this parameter to a comma-delimited list of attributes. So, my question is how can I get FullName (FirstName, LastName) from Google OpenId?
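    A sketch of one possible route (hedged; it assumes a reasonably recent DotNetOpenAuth and that Google only answers Attribute Exchange, not sreg/ClaimsRequest): request the first and last name through the AX FetchRequest extension, matching the openid.ax attribute names quoted above. The wrapper class name is illustrative.

    using DotNetOpenAuth.OpenId.Extensions.AttributeExchange;
    using DotNetOpenAuth.OpenId.RelyingParty;

    // Hypothetical sketch: request first/last name from Google via Attribute Exchange.
    static class NameViaAx
    {
        public static void AddNameRequest(IAuthenticationRequest request)
        {
            var fetch = new FetchRequest();
            fetch.Attributes.AddRequired(WellKnownAttributes.Name.First);
            fetch.Attributes.AddRequired(WellKnownAttributes.Name.Last);
            fetch.Attributes.AddRequired(WellKnownAttributes.Contact.Email);
            request.AddExtension(fetch);
        }

        public static string ReadFullName(IAuthenticationResponse response)
        {
            var ax = response.GetExtension<FetchResponse>();
            if (ax == null) return null;
            return ax.GetAttributeValue(WellKnownAttributes.Name.First) + " "
                 + ax.GetAttributeValue(WellKnownAttributes.Name.Last);
        }
    }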

    Read the article

  • ASP.NET web setup class is not defined

    - by Wayne Werner
    Hi, I've got an ASP.NET application that I installed by creating a web setup. I ran into a problem where ASP.NET wasn't registered with IIS, so it gave me an "installation was interrupted" message that told me exactly nothing. Anyhow, I finally got it installed, and I can access the main page, but it's telling me that my class isn't defined. The DLL is in the same directory as the Default.aspx page. Here's the main error information: Compiler Error Message: BC30002: Type 'SIValidator.SIValidator' is not defined. Source Error: Line 4: Line 5: <script runat="server"> Line 6: Dim validator As New SIValidator.SIValidator() Line 7: Protected table As New arrayList() Line 8: Protected countyByDistrict As New Hashtable() Version Information: Microsoft .NET Framework Version:2.0.50727.1873; ASP.NET Version:2.0.50727.1433 Am I doing it wrong? Is there some obscure setting that may not be set? I'm completely new to this VS deployment deal, so I'm trying to learn the right terms to ask the right questions... Thanks for any help. Edit: As an aside, when I searched Google 5 minutes later, this entry came up as the first result. Would have been awesome if there was an answer for me then :P

    Read the article

  • Docs for auto-generated methods in Ruby on Rails

    - by macek
    Rails has all sorts of auto-generated methods that I've often times struggled to find documentation for. For example, in routes.rb, if I have: map.resources :projects do |p| p.resources :tasks end This will get a plethora of auto-generate path and url helpers. Where can I find documentation for how to work with these paths? I generally understand how to work with them, but more explicit docs might help me understand some of the magic that happens behind the scenes. # compare project_path(@project) project_task_path(@project, @task) # to project_path(:id => @project.id) project_task_path(:project_id => @project.id, :id => @task.id) Also, when I change an attribute on a model, @post.foo_changed? will be true. Where can I find documentation for this and all other magical methods that are created like this? If the magic is there, I'd love to take advantage of it. And finally: Is there a complete resource for config.___ statements for environment.rb? I was able to find docs for Configuration#gem but what attributes can I set within the stubs like config.active_record.___, config.action_mailer.___, config.action_controller.___, etc. Again, I'm looking for a complete resource here, not just a settings for the examples I provided. Even if you can only answer one of these questions, please chime in. These things seem to have been hiding from me and it's my goal to get them some more exposure, so I'll be upvoting all links to docs that point me to what I'm looking for. Thanks! ps, If they're not called auto-generated methods, I apologize. Someone can teach me a lesson here, too :) Edit I'm not looking for tutorials here, folks. I have a fair amount of experience with rails; I'm just looking for complete docs. E.g., I understand how routing works, I just want docs where I can read about all of the usage options.
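    On the second point, a small hedged illustration (the method names come from ActiveRecord's dirty-tracking module, which is where foo_changed? and friends are generated; Post and its foo column are assumed models here). For the routing helpers, the output of rake routes lists every generated *_path / *_url name for the current routes.rb.

    # Hypothetical sketch: the *_changed? helpers come from ActiveRecord's Dirty module,
    # so its API docs cover the whole family of generated methods.
    post = Post.find(1)          # assumed ActiveRecord model with a 'foo' column
    post.foo = "new value"
    post.foo_changed?            # => true
    post.foo_was                 # => the value before the assignment
    post.changed                 # => ["foo"]
    post.changes                 # => { "foo" => [old_value, "new value"] }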

    Read the article

  • WPF - How to stop an ItemsControl pseudo-grid's columns from dancing/jumping around during layout

    - by Drew Noakes
    Several other questions on SO have come to the same conclusion I have -- using an ItemsControl with a DataTemplate for each item constructed to position items such that they resemble a grid is much simpler (especially to format) than using a ListView. The code resembles: <StackPanel Grid.IsSharedSizeScope="True"> <!-- Header --> <Grid> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto" SharedSizeGroup="Column1" /> <ColumnDefinition Width="Auto" SharedSizeGroup="Column2" /> </Grid.ColumnDefinitions> <TextBlock Grid.Column="0" Text="Column Header 1" /> <TextBlock Grid.Column="1" Text="Column Header 2" /> </Grid> <!-- Items --> <ItemsControl ItemsSource="{Binding Path=Values, Mode=OneWay}"> <ItemsControl.ItemTemplate> <DataTemplate> <Grid> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto" SharedSizeGroup="Column1" /> <ColumnDefinition Width="Auto" SharedSizeGroup="Column2" /> </Grid.ColumnDefinitions> <TextBlock Grid.Column="0" Text="{Binding ColumnProperty1}" /> <TextBlock Grid.Column="1" Text="{Binding ColumnProperty2}" /> </Grid> </DataTemplate> </ItemsControl.ItemTemplate> </ItemsControl> </StackPanel> The problem I'm seeing is that whenever I swap the object to which the ItemsSource is bound (it's an ObservableCollection that I replace the reference to, rather than clear and re-add), the entire 'grid' dances about for a few seconds. Presumably it is making a few layout passes to get all the Auto-width columns to match up. This is very distracting for my users and I'd like to get it sorted out. Has anyone else seen this?
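    One thing that may be worth trying (a hedged sketch, not a confirmed fix): rather than replacing the ObservableCollection reference, which makes the ItemsControl discard every container and re-negotiate the SharedSizeGroup widths from scratch, refresh the existing collection in place so fewer intermediate layout passes are visible. RowViewModel is an illustrative name.

    // Hypothetical sketch: reuse the bound collection instead of swapping the reference.
    public void ReplaceValues(IEnumerable<RowViewModel> newRows)
    {
        Values.Clear();                 // Values is the ObservableCollection the ItemsSource binds to
        foreach (var row in newRows)
        {
            Values.Add(row);
        }
    }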

    Read the article

  • Java Random Slowdowns on Mac OS cont'd

    - by javajustice
    I asked this question a few weeks ago, but I'm still having the problem and I have some new hints. The original question is here: http://stackoverflow.com/questions/1651887/java-random-slowdowns-on-mac-os Basically, I have a Java application that splits a job into independent pieces and runs them in separate threads. The threads have no synchronization or shared memory items. The only resources they do share are data files on the hard disk, with each thread having an open file channel. Most of the time it runs very fast, but occasionally it will run very slow for no apparent reason. If I attach a CPU profiler to it, then it will start running quickly again. If I take a CPU snapshot, it says it's spending most of its time in "self time" in a function that doesn't do anything except check a few (unshared, unsynchronized) booleans. I don't know how this could be accurate because (1) it makes no sense, and (2) attaching the profiler seems to knock the threads out of whatever mode they're in and fix the problem. Also, regardless of whether it runs fast or slow, it always finishes and gives the same output, and it never dips in total CPU usage (in this case ~1500%), implying that the threads aren't getting blocked. I have tried different garbage collectors, different sizings for the parts of the memory space, writing data output to non-RAID drives, and putting all data output in threads separate from the main worker threads. Does anyone have any idea what kind of problem this could be? Could it be the operating system (OS X 10.6.2)? I have not been able to duplicate it on a Windows machine, but I don't have one with a similar hardware configuration.

    Read the article

  • Sell me Distributed revision control

    - by ring bearer
    I know there are 1000s of similar topics floating around. I read at least 5 threads here on SO, but why am I still not convinced about DVCS? I have only the following questions (note that I am selfishly worried only about Java projects): What is the advantage or value of committing locally? What? Really? All modern IDEs allow you to keep track of your changes, and if required you can restore a particular change. Also, they have a feature to label your changes/versions at the IDE level!? What if I crash my hard drive? Where did my local repository go? (So how is it cool compared to checking in to a central repo?) Working offline or on an airplane. What is the big deal? In order for me to build a release with my changes, I must eventually connect to the central repository. Till then it does not matter how I track my changes locally. OK, Linus Torvalds gives his life to Git and hates everything else. Is that enough to blindly sing praises? Linus lives in a different world compared to the offshore developers on my mid-sized project. Pitch me!

    Read the article

  • Objective-C: Getting the True Class of Classes in Class Clusters

    - by TechZen
    Recently, while trying to answer a question here, I ran some test code to see how Xcode/gdb reported the class of instances in class clusters (see below). In the past, I've expected to see something like: PrivateClusterClass:PublicSuperClass:NSObject Such as this (which still returns as expected): NSPathStore2:NSString:NSObject ... for a string created with +[NSString pathWithComponents:]. However, with NSSet and its subclasses, the following code: - (void)applicationDidFinishLaunching:(UIApplication *)application { NSSet *s=[NSSet setWithObject:@"setWithObject"]; NSMutableSet *m=[NSMutableSet setWithCapacity:1]; [m addObject:@"Added String"]; NSMutableSet *n = [[NSMutableSet alloc] initWithCapacity:1]; [self showSuperClasses:s]; [self showSuperClasses:m]; [self showSuperClasses:n]; [self showSuperClasses:@"Steve"]; } - (void) showSuperClasses:(id) anObject{ Class cl = [anObject class]; NSString *classDescription = [cl description]; while ([cl superclass]) { cl = [cl superclass]; classDescription = [classDescription stringByAppendingFormat:@":%@", [cl description]]; } NSLog(@"%@ classes=%@",[anObject class], classDescription); } ... outputs: // NSSet *s NSCFSet classes=NSCFSet:NSMutableSet:NSSet:NSObject //NSMutableSet *m NSCFSet classes=NSCFSet:NSMutableSet:NSSet:NSObject //NSMutableSet *n NSCFSet classes=NSCFSet:NSMutableSet:NSSet:NSObject // NSString @"Steve" NSCFString classes=NSCFString:NSMutableString:NSString:NSObject The debugger shows the same class for all Set instances. I know that in the past the Set class cluster did not return like this. What has changed? (I suspect it is a change in the bridge from Core Foundation.) Which class clusters report just a generic class, e.g. NSCFSet, and which report an actual subclass, e.g. NSPathStore2? Most importantly, when debugging, how do you determine the actual class of an NSSet cluster instance?
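    A hedged sketch of one way to see the concrete class at run time (asking the Objective-C runtime directly, independently of what -class or the debugger chooses to report):

    #import <objc/runtime.h>

    // Hypothetical sketch: the runtime returns the allocated (private) class of a cluster instance.
    NSSet *s = [NSSet setWithObject:@"setWithObject"];
    Class actual = object_getClass(s);                        // the concrete class, e.g. NSCFSet
    NSLog(@"runtime class: %s", class_getName(actual));
    NSLog(@"-class reports: %@", NSStringFromClass([s class]));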

    Read the article

  • Java/XML: Good "Stream-based" Alternative to JAXB?

    - by Jan
    Hello Experts, JAXB makes working with XML so much easier, but I currently have a big problem: the documents I have to process are too large for the in-memory unmarshalling that JAXB does. The data can be up to 4GB per document. The data structure I will have to process is very simple and flat, with a root element and millions of “elements”… <root> <element> <sub>foo</sub> </element> <element> <sub>foo</sub> </element> </root> My questions are: 1. Does JAXB maybe somehow support unmarshalling in a “stream-based” way that does not require building the whole object tree in memory, but rather gives me some kind of “Iterator” over the elements, element by element? (Maybe I just missed that somehow…) 2. If not, what are your proposals for a good alternative with (a) a flat learning curve, ideally very similar to JAXB, and (b) VERY IMPORTANT: ideally with the possibility / a tool for generating the unmarshaller code from an XSD file OR an annotated Java class? 3. I have searched SO, and the two libraries that ended up on my “watchlist” (without comparing them closer) were Apache XMLBeans and XStream… What other libraries are maybe even better for the purpose, and what are their advantages and disadvantages… Thank you very much!!! Jan
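    On the first question, a hedged sketch of the usual combination (StAX drives the parse, JAXB unmarshals one <element> at a time, so the whole 4GB document never has to fit in memory; "Element" is an assumed JAXB-annotated class matching the <element> structure):

    import java.io.FileInputStream;
    import javax.xml.bind.JAXBContext;
    import javax.xml.bind.Unmarshaller;
    import javax.xml.stream.XMLInputFactory;
    import javax.xml.stream.XMLStreamReader;

    public class StreamingUnmarshal {
        public static void main(String[] args) throws Exception {
            XMLStreamReader reader = XMLInputFactory.newInstance()
                    .createXMLStreamReader(new FileInputStream("huge.xml"));
            Unmarshaller um = JAXBContext.newInstance(Element.class).createUnmarshaller();

            while (reader.hasNext()) {
                if (reader.isStartElement() && "element".equals(reader.getLocalName())) {
                    // Unmarshal exactly one <element>; the cursor advances past its end tag.
                    Element e = um.unmarshal(reader, Element.class).getValue();
                    System.out.println(e);   // replace with real per-element processing
                } else {
                    reader.next();
                }
            }
        }
    }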

    Read the article

  • ASP.NET AsyncPostBackTrigger disables button's OnClick function???

    - by hahuang65
    I saw another post like this: http://stackoverflow.com/questions/1795621/asyncpostbacktrigger-disables-buttons But I don't really know what to make of it; the accepted answer was poorly typed. Basically, I have a button with an OnClick function. I also have an UpdatePanel, with its AsyncPostBackTrigger set to that same button. It seems that if I do this, my Button no longer does ANYTHING. I guess it's because I have a Page_Load() event... Can anyone explain why this is? And how should I set up my webpage if I can't use the Page_Load() function? Oh, and if I put my Button in an UpdatePanel, it also won't do anything, probably for the same reason. This is a section of my code: <asp:FileUpload id="FileUploadControl" runat="server" /> <asp:Button runat="server" id="UploadButton" text="Upload" OnClick="uploadClicked" /> <br /><br /> <asp:Label runat="server" id="StatusLabel" text="Upload status: " /> <br /> <asp:UpdatePanel ID="UpdatePanel2" runat="server" updatemode="Conditional" > <ContentTemplate> <asp:DropDownList ID="songList" runat="server" /> </ContentTemplate> <Triggers> <asp:AsyncPostBackTrigger ControlID="RefreshButton" /> </Triggers> </asp:UpdatePanel> Thanks in advance.
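    A hedged guess at one common cause (a sketch, not a verified diagnosis): if Page_Load re-binds or resets controls on every request, it runs again on the postback before the click handler and can wipe the state the handler needs; guarding one-time setup with IsPostBack often restores the OnClick behaviour. BindSongList is an assumed helper.

    // Hypothetical sketch: only do first-request setup in Page_Load.
    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            BindSongList();   // assumed helper that fills the songList drop-down
        }
    }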

    Read the article

  • Many-to-many table design question

    - by user169867
    Originally I had 2 tables in my DB, [Property] and [Employee]. Each employee can have 1 "Home Property", so the employee table has a HomePropertyID FK field to Property. Later I needed to model the situation where, despite having only 1 "Home Property", the employee did work at or cover for multiple properties. So I created an [Employee2Property] table that has EmployeeID and PropertyID FK fields to model this many-to-many relationship. Now I find that I need to create other many-to-many relationships between employees and properties. For example, if there are multiple employees that are managers for a property or multiple employees that perform maintenance work at a property, etc. My questions are: 1) Should I create separate many-to-many tables for each of these situations, or should I just create 1 more table like [PropertyAssociationType] that lists the types of associations an employee can have with a property and just add an FK field to [Employee2Property], such as a PropertyAssociationTypeID, that explains what the association is? I'm curious about the pros/cons or whether there's another, better way. 2) Am I stupid and going about this all wrong? Thanks for any suggestions :)
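    For what the second option might look like, a hedged sketch (the Employee and Property key column names are assumed):

    -- Hypothetical sketch: one link table plus an association-type lookup,
    -- instead of a separate many-to-many table per kind of relationship.
    CREATE TABLE PropertyAssociationType (
        PropertyAssociationTypeID INT PRIMARY KEY,
        Name                      VARCHAR(50) NOT NULL  -- 'Covers', 'Manager', 'Maintenance', ...
    );

    CREATE TABLE Employee2Property (
        EmployeeID                INT NOT NULL,
        PropertyID                INT NOT NULL,
        PropertyAssociationTypeID INT NOT NULL,
        PRIMARY KEY (EmployeeID, PropertyID, PropertyAssociationTypeID),
        FOREIGN KEY (EmployeeID) REFERENCES Employee (EmployeeID),
        FOREIGN KEY (PropertyID) REFERENCES Property (PropertyID),
        FOREIGN KEY (PropertyAssociationTypeID) REFERENCES PropertyAssociationType (PropertyAssociationTypeID)
    );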

    Read the article

  • C# Hiding, overriding and calling function from base class.

    - by Lukasz Lysik
    I'm learning C# and I encountered the following problem. I have two classes, base and derived: class MyBase { public void MyMethod() { Console.WriteLine("MyBase::MyMethod()"); } } class MyDerived: MyBase { public void MyMethod() { Console.WriteLine("MyDerived::MyMethod()"); } } For now, without the virtual and override keywords. When I compile this I get the warning (which is of course expected) that I am hiding MyMethod from the MyBase class. What I want to do is call the method from the base class from an instance of the derived class. I do it like this: MyDerived myDerived = new MyDerived(); ((MyBase)myDerived).MyMethod(); It works fine when I do not specify any virtual, etc. keywords on the methods. I tried various combinations of the keywords and got the following results:
    | MyBase::MyMethod | MyDerived::MyMethod | Result printed on the console |
    | -----------------|---------------------|-------------------------------|
    | -                | -                   | MyBase::MyMethod()            |
    | -                | new                 | MyBase::MyMethod()            |
    | virtual          | new                 | MyBase::MyMethod()            |
    | virtual          | override            | MyDerived::MyMethod()         |
    I hope the table is clear to you. I have two questions: Is this the correct way to call the function from the base class (((MyBase)myDerived).MyMethod();)? I know about the base keyword, but it can be used only from inside the derived class. Is that right? Why, in the last case (with the virtual and override modifiers), was the method that got called the one from the derived class? Would you please explain that?
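    On the second question, a short hedged sketch of what changes with virtual/override: the cast only changes the compile-time view of the object; a virtual call is dispatched on the runtime type, while a hidden method is chosen from the compile-time type.

    // Hypothetical sketch illustrating the dispatch difference, using the classes above.
    MyBase asBase = new MyDerived();
    asBase.MyMethod();
    // prints MyDerived::MyMethod() when MyBase.MyMethod is virtual and MyDerived overrides it
    // prints MyBase::MyMethod()   when MyDerived merely hides it (implicitly or with 'new')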

    Read the article

  • Template compiling errors on iPhone SDK 3.2

    - by Didier Malenfant
    I'm porting over some templated code from Windows and I'm hitting some compiler differences on the iPhone 3.2 SDK. Original code inside a class template's member function is: return BinarySearch<uint32, CSimpleKey<T> >(key); where BinarySearch is a method inherited from another template. This produces the following error: csimplekeytable.h:131: error: no matching function for call to 'BinarySearch(NEngine::uint32&)' The visual studio compiler seems to walk up the template hierarchy fine but gcc needs me to fully qualify where the function comes from (I have verified this by fixing the same issues with template member variables that way). So I now need to change this into: return CSimpleTable<CSimpleKey<T> >::BinarySearch<uint32, CSimpleKey<T> >(key); Which now produces the following error: csimplekeytable.h:132: error: expected primary-expression before ',' token csimplekeytable.h:132: error: expected primary-expression before '>' token After some head scratching, I believe what's going on here is that it's trying to resolve the '<' before BinarySearch as a 'Less Than' operator for some reason. So two questions: - Am I on the right path with my interpretation of the error? - How do I fix it? -D
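    For what it's worth, a hedged sketch of the usual fix for this particular GCC complaint (assuming BinarySearch really is a member template of the base): the dependent call needs the template keyword so the parser knows the '<' starts a template argument list rather than a less-than comparison.

    // Hypothetical sketch: disambiguate the dependent member-template call for GCC.
    return this->template BinarySearch<uint32, CSimpleKey<T> >(key);

    // or, fully qualified through the base class:
    return CSimpleTable<CSimpleKey<T> >::template BinarySearch<uint32, CSimpleKey<T> >(key);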

    Read the article

  • Encapsulate update method inside of object or have method which accepts an object to update

    - by Tom
    Hi, I actually have 2 questions related to each other: I have an object (class) called, say, MyClass which holds data from my database. Currently I have a list of these objects (List<MyClass>) that resides in a singleton in a "communal area". I feel it's easier to manage the data this way, and I fail to see how passing a class around from object to object is beneficial over a singleton (I would be happy if someone can tell me why). Anyway, the data may change in the database from outside my program, and so I have to update the data every so often. To update the list of MyClass I have a method called, say, Update, written in another class, which accepts a list of MyClass. This updates all the instances of MyClass in the list. However, would it be better instead to encapsulate the Update() method inside the MyClass object, so that I would say foreach(MyClass obj in MyClassList) { obj.update(); } What is the better implementation and why? The update method requires an XML reader. I have written an XML reader class which is basically a wrapper over the standard XML reader the language natively provides, and which provides application-specific data collection. Should the XML reader class be in any way in the "inheritance path" of the MyClass object - i.e. should MyClass inherit from the XML reader because it uses a few of its methods? I can't see why it should. I don't like the idea of declaring an instance of the XML reader class inside of MyClass either; a MyClass object is meant to be a simple "record" from the database, and I feel giving it loads of methods and other object instances is a bit messy. Perhaps my XML reader class should be static, but C#'s native XmlReader isn't static? Any comments would be greatly appreciated. Thanks, Thomas

    Read the article

  • How to deploy a number of disparate project types?

    - by niteice
    This question is similar to http://stackoverflow.com/questions/1900269/whats-the-best-way-to-deploy-an-executable-process-on-a-web-server. The situation is this: I'm developing a product that needs to be deployed to a web server. It consists of 4 website projects, a background service, a couple of command-line tools, and two assemblies shared by all of these components. Now, I also happen to administer the server that this product will be deployed on, so I'm familiar with everything that may need to be done to perform an update: copy website files, replace the service binary, install updated components in the GAC, configure IIS, and update the database schema. After some research it seems that, to reduce deployment time and to be able to let the other sysadmins handle deployment, I want to deploy all of these as an MSI, except that I don't know a thing about installers. I know VS can generate web deployment projects, but where do I go from there? Being able to simply click Next a few times on an installer is my goal for deploying updates. It would also be nice to modularize it, so for example, I could distribute the four websites among multiple servers and have everything appear as individual components in the installer, and as one entity in Add/Remove Programs. Is all of this too much to ask in a single package?

    Read the article

  • Why can't I pass a marshaled interface as an integer (or pointer)?

    - by cemick
    I passed a ref of an interface from a Visio add-in to MyCOMServer (http://stackoverflow.com/questions/2455183/interface-marshalling-in-delphi). I have to pass the interface as a pointer in internal methods of MyCOMServer. I try to pass the interface to an internal method as a pointer to the interface, but after casting back, when I try to call a method of the interface, I get an exception. Simple example (the first block executes without error, but in the second block I get an exception after accessing a property of the IVApplication interface): procedure TMyCOMServer.test(const Interface_:IDispatch); stdcall; var IMy:_IMyInterface; V: Variant; Str: String; I: integer; Vis: IVApplication; begin ...... Self.QuaryInterface(_IMyInterface,IMy); str := IA.ApplicationName; V := Integer(IMy); i := V; Pointer(IMy) := Pointer(i); str := IMy.SomeProperty; // normal completion str := (Interface_ as IVApplication).Path; V := Interface_; I := V; Pointer(Vis) := Pointer(i); str := Vis.Path; // 'access violation at 0x76358e29: read of address 0xfeeefeee' end; Why can't I do it like this?

    Read the article

  • User Management: Managing users in user-defined "groups", database schema and logistics

    - by Kevin Brown
    I'm a noob, development-wise and logistics-wise. I'm developing a site that lets people take a test... My client wants the ability for a user with the role/privilege "admin" (a step below a super-admin) to be allowed to create users and only see/edit the users that they create... The users created in that "category" or group need some information that their superior provides. For example, I log in as a "manager"; I have the ability to invite people to take the test, and manage those people. Before adding those people, I will have filled out a short survey about myself... Right now, the users that are invited will be asked some of the same questions as the manager. I'd like to cut down the redundancy by using the information put into the database by the manager and applying it to the invited users. How do I set up my database to work with these criteria? I'm a little confused about how to do this! Let me know if I can add more details... (This is a MySQL and PHP app.)
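    A hedged sketch of one possible shape for this (illustrative names only; adjust to the existing schema): record which admin created each user, so an admin's view is simply the rows whose created_by matches their own id, and the invited users can inherit the manager's survey answers through that link.

    -- Hypothetical sketch (MySQL): self-referencing users table.
    CREATE TABLE users (
        id         INT AUTO_INCREMENT PRIMARY KEY,
        name       VARCHAR(100) NOT NULL,
        role       ENUM('super_admin', 'admin', 'user') NOT NULL DEFAULT 'user',
        created_by INT NULL,                        -- NULL for top-level accounts
        FOREIGN KEY (created_by) REFERENCES users (id)
    );

    -- "Only see/edit the users that they create":
    -- SELECT * FROM users WHERE created_by = :current_admin_id;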

    Read the article

  • Is my objective possible using WCF (and is it the right way to do things?)

    - by David
    I'm writing some software that modifies a Windows Server's configuration (things like MS-DNS, IIS, parts of the filesystem). My design has a server process that builds an in-memory object graph of the server configuration state and a client which requests this object graph. The server would then serialize the graph, send it to the client (presumably using WCF), the server then makes changes to this graph and sends it back to the server. The server receives the graph and proceeds to make modifications to the server. However I've learned that object-graph serialisation in WCF isn't as simple as I first thought. My objects have a hierarchy and many have parametrised-constructors and immutable properties/fields. There are also numerous collections, arrays, and dictionaries. My understanding of WCF serialisation is that it requires use of either the XmlSerializer or DataContractSerializer, but DCS places restrictions on the design of my object-graph (immutable data seems right-out, it also requires parameter-less constructors). I understand XmlSerializer lets me use my own classes provided they implement ISerializable and have the de-serializer constructor. That is fine by me. I spoke to a friend of mine about this, and he advocates going for a Data Transport Object-only route, where I'd have to maintain a separate DataContract object-graph for the transport of data and re-implement my server objects on the client. Another friend of mine said that because my service only has two operations ("GetServerConfiguration" and "PutServerConfiguration") it might be worthwhile just skipping WCF entirely and implementing my own server that uses Sockets. So my questions are: Has anyone faced a similar problem before and if so, are there better approaches? Is it wise to send an entire object graph to the client for processing? Should I instead break it down so that the client requests a part of the object graph as it needs it and sends only bits that have changed (thus reducing concurrency-related risks?)? If sending the object-graph down is the right way, is WCF the right tool? And if WCF is right, what's the best way to get WCF to serialise my object graph?
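    On the immutability point specifically, a hedged sketch (not a recommendation on the overall design): DataContractSerializer can populate private fields marked [DataMember] and does not require a public parameterless constructor on [DataContract] types, since it creates instances without running a constructor, so an outwardly immutable type can still round-trip. The class name below is illustrative, not from the original design.

    using System.Runtime.Serialization;

    // Hypothetical sketch: an outwardly immutable type that DataContractSerializer can round-trip.
    [DataContract]
    public class DnsZone
    {
        [DataMember(Name = "Name")]
        private string name;             // serialized even though it is private

        public DnsZone(string name) { this.name = name; }

        public string Name { get { return name; } }
    }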

    Read the article

  • Return value from Object match

    - by Hito_kun
    I'm by no means JS-fluent, so forgive me if I'm asking some really basic stuff, but I haven't been able to find a proper answer to my question. I'm writing my first Node.js (plus Express and Socket.io) app and I'm having some fun setting up the server side of an FB-like messenger (surprise!!!). So, let's say I have this data structure to store online users (this is a JSON array, but I'm not sure whether it is the best way to do it or whether I should go with JavaScript objects): [ { "site": 45, "users": [ { "idUser": 5, "idSocket": "qwe87r7w8qwe", "name": "Carlos Ray Norris" }, { "idUser": 6, "idSocket": "v8d9d0fgfs7d", "name": "John Connor" } ] }, { "site": 48, "users": [ { "idUser": 22, "idSocket": "qwe87r7w8qwe", "name": "David Bowie" }, { "idUser": 23, "idSocket": "v8d9d0fgfs7d", "name": "Barack H. Obama" } ] } ] What I want to do is search the array for value x given y - in this case, retrieving the idSocket knowing the idUser, WITHOUT having to run through the array values. So I basically have 2 questions: first, what would be the proper way to store online users? And secondly, how do I find values matching the values I already know (find the idSocket that has a given idUser)? I would like a pure JS approach (or one using some of the tools given by Node, Socket.io or Express), but if that's not possible then I can look at some jQuery approach.
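    One hedged sketch of the lookup part (plain JS, no library): keep an index object keyed by site and user id alongside (or instead of) the array, so finding a socket is a property access rather than a scan. The function names are illustrative.

    // Hypothetical sketch: index online users by site and idUser for constant-time lookups.
    var onlineUsers = {};   // onlineUsers[site][idUser] = { idSocket: ..., name: ... }

    function addUser(site, user) {
        onlineUsers[site] = onlineUsers[site] || {};
        onlineUsers[site][user.idUser] = { idSocket: user.idSocket, name: user.name };
    }

    function socketFor(site, idUser) {
        var siteUsers = onlineUsers[site];
        return (siteUsers && siteUsers[idUser]) ? siteUsers[idUser].idSocket : null;
    }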

    Read the article

  • Entity Framework in layered architecture

    - by Kamyar
    I am using a layered architecture with the Entity Framework. Here's what I came up with so far (all the projects except UI are class libraries): Entities: The POCO entities. Completely persistence ignorant. No references to other projects. Generated by Microsoft's ADO.Net POCO Entity Generator. DAL: The EDMX (Entity Model) file with the context class (T4 generated). References: Entities. BLL: Business Logic Layer. Will implement the repository pattern on this layer. References: Entities, DAL. This is where the ObjectContext gets populated: var ctx=new DAL.MyDBEntities(); UI: The presentation layer: ASP.NET website. References: Entities, BLL + a connection string entry to entities in the config file (question #2). Now my three questions: Is my layer distinction approach correct? In my UI, I access the BLL as follows: var customerRep = new BLL.CustomerRepository(); var Customer = customerRep.GetByID(myCustomerID); The problem is that I have to define the entities connection string in my UI's web.config/app.config, otherwise I get a runtime exception. Does defining the entities connection string in the UI spoil the layers' distinction, or is it acceptable in a multi-layered architecture? Should I take any additional steps to perform change tracking, lazy loading, etc. (by etc. I mean the features that Entity Framework covers in a conventional, 1-project, non-POCO code generation)? Thanks and apologies for the lengthy question.

    Read the article

  • How do I get rid of these warnings?

    - by Brian Postow
    This is really several questions, but anyway... I'm working with a big project in XCode, relatively recently ported from MetroWorks (Yes, really) and there's a bunch of warnings that I want to get rid of. Every so often an IMPORTANT warning comes up, but I never look at them because there's too many garbage ones. So, if I can either figure out how to get XCode to stop giving the warning, or actually fix the problem, that would be great. Here are the warnings: It claims that <map.h> is antiquated. However, when I replace it with <map> my files don't compile. Evidently, there's something in map.h that isn't in map... this decimal constant is unsigned only in ISO C90 This is a large number being compared to an unsigned long. I have even cast it, with no effect. enumeral mismatch in conditional expression: <anonymous enum> vs <anonymous enum> This appears to be from a ?: operator. Possibly that the then and else branches don't evaluate to the same type? Except that in at least one case, it's (matchVp == NULL ? noErr : dupFNErr) And since those are both of type OSErr, which is mac defined... I'm not sure what's up. It also seems to come up when I have other pairs of mac constants... multi-character character constant This one is obvious. The problem is that I actually NEED multi-character constants... -fwritable-strings not compatible with literal CF/NSString I unchecked the "Strings are Read-Only" box in both the project and target settings... and it seems to have had no effect...
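    On the second warning only, a hedged sketch of the usual remedy (the constant and variable names are illustrative): give the literal an explicit unsigned suffix so its type no longer depends on the C dialect, rather than casting at the comparison site.

    /* Hypothetical sketch: the UL suffix makes the constant unsigned long in both C90 and C99. */
    #define BIG_LIMIT 3000000000UL

    if (someUnsignedLong > BIG_LIMIT) {
        /* ... */
    }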

    Read the article

  • One big executable or many small DLL's?

    - by Patrick
    Over the years my application has grown from 1MB to 25MB and I expect it to grow further to 40, 50 MB. I don't use DLLs, but put everything in this one big executable. Having one big executable has certain advantages: installing my application at the customer is really just copy and run; upgrades can be easily zipped and sent to the customer; and there is no risk of having conflicting DLLs (where the customer has version X of the EXE, but version Y of the DLL). The big disadvantage of the big EXE is that linking times seem to grow exponentially. An additional problem is that a part of the code (let's say about 40%) is shared with another application. Again, the advantages are that there is no risk of having a mix of incorrect DLL versions, and every developer can make changes to the common code, which speeds up development. But again, this has a serious impact on compilation times (everyone compiles the common code again on his PC) and on linking times. The question http://stackoverflow.com/questions/2387908/grouping-dlls-for-use-in-executable mentions the possibility of mixing DLLs into one executable, but it looks like this still requires you to link all functions manually in your application (using LoadLibrary, GetProcAddress, ...). What is your opinion on executable sizes, the use of DLLs and the best 'balance' between easy deployment and easy/fast development?

    Read the article
