Search Results

Search found 5128 results on 206 pages for 'member hiding'.


  • Mimic.js handle fault response

    - by nikolas
    I use mimic.js in a project that I'm developing. The issue I face is that when there is a fault response from the web service, mimic doesn't handle it, and the browser remains "awaiting" a response that has actually come back but was never processed by mimic. To be more specific, a typical fault response is the following:

        <?xml version="1.0" encoding="UTF-8"?>
        <methodResponse>
          <fault>
            <value>
              <struct>
                <member>
                  <name>faultCode</name><value><int>104</int></value>
                </member>
                <member>
                  <name>faultString</name><value><string>Invalid Input Parameters</string></value>
                </member>
              </struct>
            </value>
          </fault>
        </methodResponse>

    and the Chrome console gives me the error:

        mimic.js:11 Uncaught TypeError: Cannot read property 'childNodes' of null

    Any suggestions on how to handle "fault" responses? mimic.js hasn't been altered at all. I also tried to bypass the fact that mimic can't handle the fault by using the isFault flag in the if statement, with no success either. isFault is supposed to get a boolean value, I guess true/false?
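
    Since mimic dies at line 11 dereferencing a node that isn't there, one general fix (a sketch against the standard DOM API, not mimic's actual internals; the function name is hypothetical) is to probe for a <fault> element before descending into <params>:

        function parseMethodResponse(xmlDoc) {
            // XML-RPC faults carry a <fault> element instead of <params>,
            // so check for it first rather than assuming <params> exists
            var fault = xmlDoc.getElementsByTagName('fault')[0];
            if (fault) {
                // pull faultCode/faultString out of the <struct> here
                return { isFault: true, fault: fault };
            }
            var params = xmlDoc.getElementsByTagName('params')[0];
            if (!params) {
                // neither <params> nor <fault>: malformed response; treat
                // it as a fault instead of dereferencing null.childNodes
                return { isFault: true, fault: null };
            }
            // normal parsing of params.childNodes continues here
            return { isFault: false, params: params };
        }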

    Read the article

  • stdexcept On Android

    - by David R.
    I'm trying to compile SoundTouch on Android. I started with this configure line:

        ./configure CPPFLAGS="-I/Volumes/android-build/mydroid/development/ndk/build/platforms/android-3/arch-arm/usr/include/" LDFLAGS="-Wl,-rpath-link=/Volumes/android-build/mydroid/development/ndk/build/platforms/android-3/arch-arm/usr/lib -L/Volumes/android-build/mydroid/development/ndk/build/platforms/android-3/arch-arm/usr/lib -nostdlib -lc" --host=arm-eabi --enable-shared=yes CFLAGS="-nostdlib -O3 -mandroid" host_alias=arm-eabi --no-create --no-recursion

    Because the Android NDK targets ARM, I also had to change the Makefile to remove the -msse2 flags to make progress. When I run 'make', I get:

        /bin/sh ../../libtool --tag=CXX --mode=compile arm-eabi-g++ -DHAVE_CONFIG_H -I. -I../../include -I../../include -I/Volumes/android-build/mydroid/development/ndk/build/platforms/android-3/arch-arm/usr/include/ -O3 -fcheck-new -I../../include -g -O2 -MT FIRFilter.lo -MD -MP -MF .deps/FIRFilter.Tpo -c -o FIRFilter.lo FIRFilter.cpp
        libtool: compile: arm-eabi-g++ -DHAVE_CONFIG_H -I. -I../../include -I../../include -I/Volumes/android-build/mydroid/development/ndk/build/platforms/android-3/arch-arm/usr/include/ -O3 -fcheck-new -I../../include -g -O2 -MT FIRFilter.lo -MD -MP -MF .deps/FIRFilter.Tpo -c FIRFilter.cpp -o FIRFilter.o
        FIRFilter.cpp:46:21: error: stdexcept: No such file or directory
        FIRFilter.cpp: In member function 'virtual void soundtouch::FIRFilter::setCoefficients(const soundtouch::SAMPLETYPE*, uint, uint)':
        FIRFilter.cpp:177: error: 'runtime_error' is not a member of 'std'
        FIRFilter.cpp: In static member function 'static void* soundtouch::FIRFilter::operator new(size_t)':
        FIRFilter.cpp:225: error: 'runtime_error' is not a member of 'std'
        make[2]: *** [FIRFilter.lo] Error 1
        make[1]: *** [all-recursive] Error 1
        make: *** [all-recursive] Error 1

    This isn't very surprising, since the -nostdlib flag was required; Android seems to have neither stdexcept nor stdlib. How can I get past this block in compiling SoundTouch? At a guess, there may be some flag I don't know about that I should use. I could refactor the code not to use stdexcept. There may be a way to pull in the original stdexcept source and reference that. I might be able to link to a precompiled stdexcept library.
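
    Along the lines of the "pull in a replacement" option, a rough stand-in header can get past the missing include (a sketch of my own, not part of SoundTouch or the NDK; it assumes the code only ever constructs std::runtime_error from string literals, and note that throwing still needs exception runtime support, which -nostdlib does not provide):

        // stdexcept_shim.h - minimal stand-in for <stdexcept> on toolchains
        // that ship no C++ standard library. Only covers what FIRFilter.cpp
        // appears to need: std::runtime_error built from a literal.
        #ifndef STDEXCEPT_SHIM_H
        #define STDEXCEPT_SHIM_H

        namespace std {
            class runtime_error {
                const char* msg_;
            public:
                explicit runtime_error(const char* msg) : msg_(msg) {}
                const char* what() const { return msg_; }
            };
        }

        #endif

    Swapping the #include <stdexcept> lines for this (or passing it via -include) would address the compile errors shown above; the link step may still complain about __cxa_throw and friends unless an exceptions-capable runtime is linked in.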

    Read the article

  • Accessing type members outside the class in Scala

    - by Pekka Mattila
    Hi, I am trying to understand type members in Scala. I wrote a simple example that tries to explain my question. First, I created two classes for types:

        class BaseclassForTypes
        class OwnType extends BaseclassForTypes

    Then, I defined an abstract type member in a trait and then defined the type member in a concrete class:

        trait ScalaTypesTest {
          type T <: BaseclassForTypes
          def returnType: T
        }

        class ScalaTypesTestImpl extends ScalaTypesTest {
          type T = OwnType
          override def returnType: T = {
            new T
          }
        }

    Then, I want to access the type member (yes, the type is not needed here, but this explains my question). Both examples work.

    Solution 1: declaring the type. The problem here is that it does not use the type member, and the type information is duplicated (caller and callee).

        val typeTest = new ScalaTypesTestImpl
        val typeObject: OwnType = typeTest.returnType // declare the type second time here
        true must beTrue

    Solution 2: initializing the class and using the type through the object. I don't like this, since the class needs to be initialized.

        val typeTest = new ScalaTypesTestImpl
        val typeObject: typeTest.T = typeTest.returnType // through an instance
        true must beTrue

    So, is there a better way of doing this, or are type members meant to be used only with the internal implementation of a class?
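
    For what it's worth, one pattern that avoids repeating the concrete type (a sketch of mine, assuming a Scala version that supports dependent method types) is to let a helper's result type depend on its argument, so OwnType is named only inside the implementation class:

        // callers never mention OwnType; the member type propagates
        def useReturnType(t: ScalaTypesTest): t.T = t.returnType

        val typeTest = new ScalaTypesTestImpl
        val typeObject = useReturnType(typeTest) // inferred as typeTest.T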

    Read the article

  • Java XMLRPC request-String

    - by Philip
    Hi, I'm using Apache XML-RPC 3.1.2 to talk to an online service. They have something special: they need a hash over the whole XML, computed with a secret key, for some kind of security, like this:

        String hash = md5(xmlRequest + secretKey);
        String requestURL = "http://foo.bar/?authHash=" + hash;

    So I need the XML request like this:

        <?xml version="1.0"?>
        <methodCall>
          <methodName>foo.bar</methodName>
          <params>
            <param>
              <value><struct>
                <member><name>bla</name>
                  <value><int>1</int></value>
                </member>
                <member><name>blubb</name>
                  <value><int>2</int></value>
                </member>
              </struct></value>
            </param>
          </params>
        </methodCall>

    But how do I get this String representation of the XML-RPC request with the Apache XML-RPC library?
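
    However the serialized request is obtained, the hashing step itself needs nothing beyond the JDK. A sketch (the class and method names are my own, not Apache XML-RPC API):

        import java.security.MessageDigest;

        final class AuthHash {
            // hex-encoded MD5 of (xmlRequest + secretKey), as the service
            // described above requires for its authHash parameter
            static String md5Hex(String xmlRequest, String secretKey)
                    throws Exception {
                byte[] digest = MessageDigest.getInstance("MD5")
                        .digest((xmlRequest + secretKey).getBytes("UTF-8"));
                StringBuilder hex = new StringBuilder();
                for (byte b : digest) {
                    hex.append(String.format("%02x", b));
                }
                return hex.toString();
            }
        }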

    Read the article

  • Fix an external dependency of a ruby gem

    - by Patrick Daryll Glandien
    I am currently trying to install the gem nfoiled, which provides a Ruby interface to ncurses. I do this by using gem install elliottcable-nfoiled as suggested in the README. Downloading it manually from the github repository and then installing it with rake install doesn't work because of a problem with the echoe gem, thus I am bound to use the normal way. Unfortunately it depends on the gem ncurses-0.9.1, which is only compatible with Ruby 1.8, and thus I can't install nfoiled either (since it always tries to compile ncurses-0.9.1 first):

        novavortex:/usr/src# gem install elliottcable-nfoiled
        Building native extensions.  This could take a while...
        ...
        form_wrap.c: In function `rbncurs_m_new_form':
        form_wrap.c:395: error: `struct RArray' has no member named `len'
        form_wrap.c: In function `rbncurs_c_set_field_type':
        form_wrap.c:619: error: `struct RArray' has no member named `len'
        form_wrap.c: In function `rbncurs_c_set_form_fields':
        form_wrap.c:778: error: `struct RArray' has no member named `len'
        form_wrap.c: In function `make_arg':
        form_wrap.c:1126: error: `struct RArray' has no member named `len'
        make: *** [form_wrap.o] Error 1
        Gem files will remain installed in /usr/local/lib/ruby/gems/1.9.1/gems/ncurses-0.9.1 for inspection.
        Results logged to /usr/local/lib/ruby/gems/1.9.1/gems/ncurses-0.9.1/gem_make.out
        novavortex:/usr/src#

    I managed to fix the problem in ncurses-0.9.1 (by replacing RARRAY(x)->len with RARRAY_LEN(x)) and to install it, but nfoiled still always tries to recompile it from a freshly downloaded source. How can I install nfoiled without having it recompile ncurses first?
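
    For reference, the 1.8/1.9 fix described above is often expressed the other way around, as a compatibility shim near the top of the affected C file (a sketch, not taken from the nfoiled or ncurses sources):

        /* On Ruby 1.8, RARRAY_LEN does not exist yet; define it in terms
         * of the old struct access so one source compiles on 1.8 and 1.9. */
        #ifndef RARRAY_LEN
        #define RARRAY_LEN(a) (RARRAY(a)->len)
        #endif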

    Read the article

  • class header+ implementation

    - by igor
    What am I doing wrong here? I keep getting a compilation error when I try to run this in CodeLab (Turing's Craft). Instructions: Write the implementation (.cpp file) of the GasTank class of the previous exercise. The full specification of the class is:

      - A data member named amount of type double.
      - A constructor that takes no parameters. The constructor initializes the data member amount to 0.
      - A function named addGas that accepts a parameter of type double. The value of the amount instance variable is increased by the value of the parameter.
      - A function named useGas that accepts a parameter of type double. The value of the amount data member is decreased by the value of the parameter.
      - A function named getGasLevel that accepts no parameters. getGasLevel returns the value of the amount data member.

        class GasTank{
            double amount;
            GasTank();
            void addGas(double);
            void useGas(double);
            double getGasLevel();};

        GasTank::GasTank(){
            amount=0;}

        double GasTank::addGas(double a){
            amount+=a;}

        double GasTank::useGas(double a){
            amount+=a;}

        double GasTank::getGasLevel(){
            return amount;}
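
    The usual diagnosis for this exercise (my reading, not CodeLab's official answer): everything in the class is private by default, the definitions' return types don't match the void declarations, and useGas adds instead of subtracts. A corrected sketch:

        class GasTank {
            double amount;
        public:                       // constructor and methods must be public
            GasTank();
            void addGas(double);
            void useGas(double);
            double getGasLevel();
        };

        GasTank::GasTank() { amount = 0; }
        void GasTank::addGas(double a) { amount += a; }  // return type matches declaration
        void GasTank::useGas(double a) { amount -= a; }  // decrease, not increase
        double GasTank::getGasLevel() { return amount; }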

    Read the article

  • Wpf binding with nested properties

    - by byte
    ViewModel: I have a property of type Member called KeyMember. The Member type has an ObservableCollection<Address> called Addresses. The Address is composed of two strings - street and postcode.

    View: I have a ListBox whose ItemsSource needs to be set to the ViewModel's KeyMember property, and it should display the Street of all the past Addresses in the collection.

    Question: My ViewModel and View relationship is established properly. I am able to write a data template for the simple case above:

        <ListBox ItemsSource="{Binding KeyMember.Addresses}">
          <ListBox.ItemTemplate>
            <DataTemplate DataType="Address">
              <TextBlock Text="{Binding Street}"/>
            </DataTemplate>
          </ListBox.ItemTemplate>
        </ListBox>

    How would I write the DataTemplate if I change KeyMember from type Member to ObservableCollection<Member>, assuming that the collection has only one element?

    PS: I know that for multiple elements in the collection, I will have to implement the Master-Detail pattern/scenario.
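
    One answer that fits the single-element assumption (a sketch, untested here): WPF binding paths accept indexers, so the ItemsSource can point at the first Member directly and the rest of the template stays as written:

        <!-- KeyMember is now ObservableCollection<Member>; [0] picks the
             single element, then the existing Addresses binding applies -->
        <ListBox ItemsSource="{Binding KeyMember[0].Addresses}">
          <ListBox.ItemTemplate>
            <DataTemplate DataType="Address">
              <TextBlock Text="{Binding Street}"/>
            </DataTemplate>
          </ListBox.ItemTemplate>
        </ListBox>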

    Read the article

  • structure inside structure - c++ Error

    - by gamadeus
    First of all, the error I am getting is of the type:

        Request for member 's' of struct1.struct1::struct2, which is of non class type '__u32'

    where:

        struct struct1 {
            struct x struct2;
            struct x struct3;
            struct x struct4;
        };

    The usage is of the form:

        struct struct1 st1;
        st1.struct2.s = Value;

    Now my struct1 is:

        struct ip_mreq_source {
            struct in_addr imr_multiaddr;
            struct in_addr imr_sourceaddr;
            struct in_addr imr_interface;
        };

    struct 'x' is in_addr, where:

        typedef uint32_t in_addr_t;
        struct in_addr {
            in_addr_t s_addr;
        };

    and element 's' is the member s_addr of in_addr. The detailed error coming out of g++ (GCC 4.4.3) from the Android-based compiler:

        arm-linux-androideabi-g++ -MMD -MP -MF groupsock/GroupsockHelper.o.d.org -fpic -ffunction-sections -funwind-tables -fstack-protector -D__ARM_ARCH_5__ -D__ARM_ARCH_5T__ -D__ARM_ARCH_5E__ -D__ARM_ARCH_5TE__ -Wno-psabi -march=armv5te -mtune=xscale -msoft-float -fno-exceptions -fno-rtti -mthumb -Os -fomit-frame-pointer -fno-strict-aliasing -finline-limit=64 -Igroupsock/include -Igroupsock/../UsageEnvironment/include -Iandroid-ndk-r5b/sources/cxx-stl/system/include -Igroupsock -DANDROID -Wa,--noexecstack -DANDROID_NDK -Wall -fexceptions -O2 -DNDEBUG -g -Iandroid-8/arch-arm/usr/include -c groupsock/GroupsockHelper.cpp -o groupsock/GroupsockHelper.o && rm -f groupsock/GroupsockHelper.o.d && mv groupsock/GroupsockHelper.o.d.org groupsock/GroupsockHelper.o.d
        groupsock/GroupsockHelper.cpp: In function 'Boolean socketJoinGroupSSM(UsageEnvironment&, int, netAddressBits, netAddressBits)':
        groupsock/GroupsockHelper.cpp:427: error: request for member 's_addr' in 'imr.ip_mreq_source::imr_multiaddr', which is of non-class type '__u32'
        groupsock/GroupsockHelper.cpp:428: error: request for member 's_addr' in 'imr.ip_mreq_source::imr_sourceaddr', which is of non-class type '__u32'
        groupsock/GroupsockHelper.cpp:429: error: request for member 's_addr' in 'imr.ip_mreq_source::imr_interface', which is of non-class type '__u32'

    I am not sure what is causing the error. Any pointers would be great - no pun intended. Thanks
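
    Reading the error text, the compiler is saying that imr_multiaddr itself is a plain __u32 on this platform, i.e. the NDK's kernel-derived headers define ip_mreq_source with bare integers rather than struct in_addr. If that is the case (an assumption worth verifying in the android-8 copy of linux/in.h), the fix is to drop the .s_addr accessor on Android; a sketch with hypothetical variable names and guard macro:

        #ifdef ANDROID
            imr.imr_multiaddr  = multicastAddress;        // field is already a __u32
            imr.imr_sourceaddr = sourceAddress;
            imr.imr_interface  = interfaceAddress;
        #else
            imr.imr_multiaddr.s_addr  = multicastAddress; // field is struct in_addr
            imr.imr_sourceaddr.s_addr = sourceAddress;
            imr.imr_interface.s_addr  = interfaceAddress;
        #endif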

    Read the article

  • How can I declare and initialize an array of pointers to a structure in C?

    - by worlds-apart89
    I have a small assignment in C. I am trying to create an array of pointers to a structure. My question is: how can I initialize each pointer to NULL? Also, after I allocate memory for a member of the array, I cannot assign values to the structure to which the array element points.

        #include <stdio.h>
        #include <stdlib.h>

        typedef struct list_node list_node_t;
        struct list_node {
            char *key;
            int value;
            list_node_t *next;
        };

        int main() {
            list_node_t *ptr = (list_node_t*) malloc(sizeof(list_node_t));
            ptr->key = "Hello There";
            ptr->value = 1;
            ptr->next = NULL;
            // Above works fine

            // Below is erroneous
            list_node_t **array[10] = {NULL};
            *array[0] = (list_node_t*) malloc(sizeof(list_node_t));
            array[0]->key = "Hello world!"; //request for member 'key' in something not a structure or union
            array[0]->value = 22;           //request for member 'value' in something not a structure or union
            array[0]->next = NULL;          //request for member 'next' in something not a structure or union

            // Do something with the data at hand
            // Deallocate memory using function free
            return 0;
        }
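
    The likely culprit (my diagnosis, not from the original post): list_node_t **array[10] declares an array of pointer-to-pointer, so *array[0] dereferences an uninitialized pointer and array[0]-> is applied to the wrong type. An array of pointers needs one star, and the = { NULL } initializer then zeroes every slot:

        /* corrected sketch: one level of indirection */
        list_node_t *array[10] = { NULL };  /* all ten pointers start NULL */

        array[0] = malloc(sizeof(list_node_t));
        array[0]->key = "Hello world!";
        array[0]->value = 22;
        array[0]->next = NULL;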

    Read the article

  • Paste or Drop, copy data and release source?

    - by Harvey
    I have an MFC DocView SDI app that receives data from either the clipboard or drag and drop. The data is in either CF_HDROP or CF_TEXT format. I have a COleDropTarget-derived CMyDropTarget member m_dropTarget in my CMainFrame class, and two member functions of CMyDropTarget, OnDrop(...) and OnPaste(), which each call another member function, PostData(pDataObject). I want to get a copy of the pDataObject from either CF_... format and PostMessage to my CMainFrame, which will call a member of my Doc class.

    What is a simple way of getting a copy of the global data to pass with the PostMessage() so that I can get the drop source released before I get around to processing the global data? Note that the data is always treated as a copy of the source data, so there is no need for the source to delete anything when the operation is done. Or perhaps a better way of asking the question is: how can I release the drop source before processing the global data? Can I pass the HGLOBAL via PostMessage and still release the source without making a copy of it?
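
    The usual pattern for "copy now, process later" (a sketch with plain Win32 calls; the message ID and helper name are hypothetical) is to duplicate the HGLOBAL before returning from OnDrop/OnPaste, so the source can be released immediately and the posted handler owns the copy:

        // duplicate an HGLOBAL so the drop source can be released at once
        HGLOBAL CopyGlobal(HGLOBAL hSrc)
        {
            SIZE_T size = GlobalSize(hSrc);
            HGLOBAL hDst = GlobalAlloc(GMEM_MOVEABLE, size);
            if (hDst != NULL)
            {
                memcpy(GlobalLock(hDst), GlobalLock(hSrc), size);
                GlobalUnlock(hSrc);
                GlobalUnlock(hDst);
            }
            return hDst;
        }

        // later, inside PostData(pDataObject), something like:
        //   PostMessage(WM_APP_DATA_ARRIVED, (WPARAM)cfFormat,
        //               (LPARAM)CopyGlobal(hData));
        // the receiving handler calls GlobalFree on the copy when done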

    Read the article

  • Optimizing an embedded SELECT query in mySQL

    - by Crazy Serb
    Ok, here's a query that I am running right now on a table that has 45,000 records and is 65MB in size... and is just about to get bigger and bigger (so I've got to think of the future performance as well here):

        SELECT count(payment_id) as signup_count, sum(amount) as signup_amount
        FROM payments p
        WHERE tm_completed BETWEEN '2009-05-01' AND '2009-05-30'
          AND completed > 0
          AND tm_completed IS NOT NULL
          AND member_id NOT IN (SELECT p2.member_id
                                FROM payments p2
                                WHERE p2.completed=1
                                  AND p2.tm_completed < '2009-05-01'
                                  AND p2.tm_completed IS NOT NULL
                                GROUP BY p2.member_id)

    And as you might or might not imagine - it chokes the mysql server to a standstill. What it does is simply pull the number of new users who signed up: members who have at least one "completed" payment, whose tm_completed is not empty (as it is only populated for completed payments), and - via the embedded SELECT - who have never had a "completed" payment before, meaning they are new members (the system does rebills and whatnot, and this is the only way to differentiate between an existing member who just got rebilled and a new member who got billed for the first time).

    Now, is there any possible way to optimize this query to use fewer resources, and to stop bringing my mysql server down on its knees...? Am I missing any info to clarify this any further? Let me know...

    EDIT: Here are the indexes already on that table:

        Keyname       Type     Cardinality  Columns
        PRIMARY       PRIMARY  46757        payment_id
        member_id     INDEX    23378        member_id
        payer_id      INDEX    11689        payer_id
        coupon_id     INDEX    1            coupon_id
        tm_added      INDEX    46757        tm_added, product_id
        tm_completed  INDEX    46757        tm_completed, product_id
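
    One rewrite that is commonly suggested for MySQL of that era (a sketch, untested against this schema): older MySQL tends to re-execute a NOT IN subquery per row, so expressing the same "no earlier completed payment" condition as a LEFT JOIN anti-join usually helps, especially with the member_id index already in place:

        SELECT COUNT(p.payment_id) AS signup_count,
               SUM(p.amount)       AS signup_amount
        FROM payments p
        LEFT JOIN payments prev
               ON prev.member_id = p.member_id
              AND prev.completed = 1
              AND prev.tm_completed < '2009-05-01'
              AND prev.tm_completed IS NOT NULL
        WHERE p.tm_completed BETWEEN '2009-05-01' AND '2009-05-30'
          AND p.completed > 0
          AND p.tm_completed IS NOT NULL
          AND prev.payment_id IS NULL   -- no earlier completed payment found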

    Read the article

  • How to structure this Symfony web project?

    - by James William
    I am new to Symfony and am not sure how to best structure my web project. The solution must accommodate three use cases:

      1. Public access to www.mydomain.com for general use
      2. Member-only access to member.mydomain.com
      3. Administrator access to admin.mydomain.com

    All three virtual hosts point to the Symfony /web directory.

    Questions: Is this three separate applications in my Symfony project (e.g. "frontend", "backend" and "admin", or "public", "member", "admin")? Is this a good approach if there is to be some duplicate code (e.g. generating a member list would be common across all three applications, but presented differently)? How would I route to the various applications based on the subdomain when a user accesses *.mydomain.com? Where in Symfony should this routing logic be placed? Or is this one application with modules for each of the above use cases?

    EDIT: I do not have access to httpd.conf in Apache to specify a default page for virtual hosts. I can only specify a directory for each subdomain using the hosting provider's cPanel.
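
    If it ends up as three applications, the shared /web directory usually means one front controller that picks the application from the host name. A sketch for symfony 1.x (application names taken from the question; untested):

        <?php
        // web/index.php - choose the application from the subdomain
        require_once(dirname(__FILE__).'/../config/ProjectConfiguration.class.php');

        $host = $_SERVER['HTTP_HOST'];
        if (strpos($host, 'admin.') === 0) {
            $app = 'admin';
        } elseif (strpos($host, 'member.') === 0) {
            $app = 'member';
        } else {
            $app = 'frontend';   // www / bare domain
        }

        $configuration = ProjectConfiguration::getApplicationConfiguration($app, 'prod', false);
        sfContext::createInstance($configuration)->dispatch();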

    Read the article

  • What does an object look like in memory?

    - by NeilMonday
    This is probably a really dumb question, but I will ask anyway. I am curious what an object looks like in memory. Obviously it would have to have all of its member data in it. I assume that functions for an object would not be duplicated in memory (or maybe I am wrong?). It would seem wasteful to have 999 objects in memory, all with the same function defined over and over. If there is only one copy of a function in memory for all 999 objects, then how does each function know whose member data to modify (I specifically want to know at the low level)? Is there an object pointer that gets sent to the function behind the scenes? Perhaps it is different for every compiler?

    Also, how does the static keyword affect this? With static member data, I would think that all 999 objects would use the exact same memory location for their static member data. Where does this get stored? Static functions I guess would also just be one place in memory, and would not have to interact with instantiated objects, which I think I understand.
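
    That intuition is right for typical C++ implementations: non-virtual member functions exist once, and each call passes a hidden pointer to the object. A sketch of the mental model (illustrative only; the exact lowering is compiler-specific):

        class Counter {
            int n;                    // per-object: lives inside each instance
        public:
            void bump() { ++n; }      // shared: one copy of the code
            // a static member function would get no hidden pointer, and
            // static data would live once, outside all instances
        };

        // conceptually the compiler lowers bump() to a free function:
        //   void Counter_bump(Counter* this_) { ++this_->n; }
        // and rewrites c.bump() as Counter_bump(&c)

        int main() {
            Counter a{}, b{};
            a.bump();                 // same code, different hidden pointer
            b.bump();
        }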

    Read the article

  • [Rails3] How to do multiple many to many relationships between the same two tables.

    - by Kurt
    Hi. I have a model of a club where I want to model the two entities Meeting and Member. There are actually two many-to-many relationships between these entities, though, as for any meeting a Member can be either a Speaker or a Guest. Now I am an OO thinker, so I would normally just create the two classes, and each one would have two arrays of the other inside; but Rails is making me think a bit more data-centric here, so I realize I need to break these M2M relationships up with join tables Speakers and Guests, which I have done. But now I am having trouble describing the relationships in the models.

    The two join table models both have "belongs_to :meeting" and "belongs_to :member", and I think that should be sufficient. I am not however sure about the Meeting and Member models. Each one has "has_many :guests" and "has_many :speakers", but I am not sure if I also want to go:

        has_many :members, :through => :guests
        has_many :members, :through => :speakers

    But I suspect that this is like declaring two "members" that will clash. I also thought about:

        has_many :guests, :through => :guests
        has_many :speakers, :through => :speakers

    Does that make sense? How would ActiveRecord know that they are in fact Members? I have found heaps of examples of polymorphic M2M relationships and M2M relationships where one table references itself, but no good examples to help me model this situation where two separate tables have two different M2M relationships. Anyone got any tips?
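
    The piece that makes two join models between the same tables work (a sketch of the usual Rails 3 shape; the association names are my own) is giving each collection a distinct name and telling ActiveRecord the real target with :source:

        class Meeting < ActiveRecord::Base
          has_many :speakers
          has_many :guests
          # distinct names, both resolving to Member via :source
          has_many :speaking_members, :through => :speakers, :source => :member
          has_many :guest_members,    :through => :guests,   :source => :member
        end

        class Member < ActiveRecord::Base
          has_many :speakers
          has_many :guests
          has_many :meetings_spoken_at, :through => :speakers, :source => :meeting
          has_many :meetings_attended,  :through => :guests,   :source => :meeting
        end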

    Read the article

  • Including typedef of child in parent class

    - by Baz
    I have a class which looks something like this. I'd prefer to have the typedef of ParentMember in the Parent class and rename it Member. How might this be possible? The only way I can see is to have the std::vector as a public member instead of using inheritance.

        typedef std::pair<std::string, boost::any> ParentMember;

        class Parent: public std::vector<ParentMember>
        {
        public:
            template <typename T>
            std::vector<T>& getMember(std::string& s)
            {
                MemberFinder finder(s);
                std::vector<ParentMember>::iterator member = std::find_if(begin(), end(), finder);
                boost::any& container = member->second;
                return boost::any_cast<std::vector<T>&>(container);
            }

        private:
            class Finder
            {
                ...
            };
        };
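
    One way to get the name inside Parent without giving up the vector base (a sketch; the small helper base class is my invention) is to introduce the typedef in a base that is inherited before the vector, so the vector's template argument can refer to it:

        #include <string>
        #include <utility>
        #include <vector>
        #include <boost/any.hpp>

        struct ParentTypes {
            typedef std::pair<std::string, boost::any> Member;
        };

        class Parent : public ParentTypes,
                       public std::vector<ParentTypes::Member> {
            // clients can now write Parent::Member
        };

    Composition, as you note, avoids the questionable practice of inheriting from std::vector entirely; this variant just preserves your current interface.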

    Read the article

  • LINQ to SQL and missing Many to Many EntityRefs

    - by Rick Strahl
    Ran into an odd behavior today with a many to many mapping of one of my tables in LINQ to SQL. Many to many mappings aren't transparent in LINQ to SQL and it maps the link table the same way the SQL schema has it when creating one. In other words LINQ to SQL isn't smart about many to many mappings and just treats it like the 3 underlying tables that make up the many to many relationship. Iain Galloway has a nice blog entry about Many to Many relationships in LINQ to SQL. I can live with that – it's not really difficult to deal with this arrangement once mapped, especially when reading data back. Writing is a little more difficult as you do have to insert into two entities for new records, but nothing that can't be handled in a small business object method with a few lines of code.

    When I created a database I've been using to experiment around with various different OR/Ms recently, I found that for some reason LINQ to SQL was completely failing to map even to the linking table. As it turns out there's a good reason why it fails – can you spot it below? (read on :-})

    Here is the original database layout: there's an items table, a category table and a link table that holds only the foreign keys to the Items and Category tables for a typical M->M relationship. When these three tables are imported into the model they *look* correct – I do get the relationships added (after modifying the entity names to strip the prefix). The relationship looks perfectly fine, both in the designer as well as in the XML document:

        <Table Name="dbo.wws_Item_Categories" Member="ItemCategories">
          <Type Name="ItemCategory">
            <Column Name="ItemId" Type="System.Guid" DbType="uniqueidentifier NOT NULL" CanBeNull="false" />
            <Column Name="CategoryId" Type="System.Guid" DbType="uniqueidentifier NOT NULL" CanBeNull="false" />
            <Association Name="ItemCategory_Category" Member="Categories" ThisKey="CategoryId" OtherKey="Id" Type="Category" />
            <Association Name="Item_ItemCategory" Member="Item" ThisKey="ItemId" OtherKey="Id" Type="Item" IsForeignKey="true" />
          </Type>
        </Table>
        <Table Name="dbo.wws_Categories" Member="Categories">
          <Type Name="Category">
            <Column Name="Id" Type="System.Guid" DbType="UniqueIdentifier NOT NULL" IsPrimaryKey="true" IsDbGenerated="true" CanBeNull="false" />
            <Column Name="ParentId" Type="System.Guid" DbType="UniqueIdentifier" CanBeNull="true" />
            <Column Name="CategoryName" Type="System.String" DbType="NVarChar(150)" CanBeNull="true" />
            <Column Name="CategoryDescription" Type="System.String" DbType="NVarChar(MAX)" CanBeNull="true" />
            <Column Name="tstamp" AccessModifier="Internal" Type="System.Data.Linq.Binary" DbType="rowversion" CanBeNull="true" IsVersion="true" />
            <Association Name="ItemCategory_Category" Member="ItemCategory" ThisKey="Id" OtherKey="CategoryId" Type="ItemCategory" IsForeignKey="true" />
          </Type>
        </Table>

    However when looking at the generated code these navigation properties (also on Item) are completely missing:

        [global::System.Data.Linq.Mapping.TableAttribute(Name="dbo.wws_Item_Categories")]
        [global::System.Runtime.Serialization.DataContractAttribute()]
        public partial class ItemCategory : Westwind.BusinessFramework.EntityBase
        {
            private System.Guid _ItemId;
            private System.Guid _CategoryId;

            public ItemCategory()
            {
            }

            [global::System.Data.Linq.Mapping.ColumnAttribute(Storage="_ItemId", DbType="uniqueidentifier NOT NULL")]
            [global::System.Runtime.Serialization.DataMemberAttribute(Order=1)]
            public System.Guid ItemId
            {
                get { return this._ItemId; }
                set
                {
                    if ((this._ItemId != value))
                    {
                        this._ItemId = value;
                    }
                }
            }

            [global::System.Data.Linq.Mapping.ColumnAttribute(Storage="_CategoryId", DbType="uniqueidentifier NOT NULL")]
            [global::System.Runtime.Serialization.DataMemberAttribute(Order=2)]
            public System.Guid CategoryId
            {
                get { return this._CategoryId; }
                set
                {
                    if ((this._CategoryId != value))
                    {
                        this._CategoryId = value;
                    }
                }
            }
        }

    Notice that the Item and Category association properties, which should be EntityRef properties, are completely missing. They're there in the model, but the generated code – not so much. So what's the problem here?

    The problem – it appears – is that LINQ to SQL requires primary keys on all entities it tracks. In order to support tracking – even of the link table entity – the link table requires a primary key. Real obvious ain't it, especially since the designer happily lets you import the table and even shows the relationship and implicitly the related properties. Adding an Id field as a PK to the database and then importing properly generates the Item and Category properties into the link entity.

    It's ironic that LINQ to SQL *requires* the PK in the middle – the Entity Framework requires that a link table have *only* the two foreign key fields in order to recognize a many to many relation. EF actually handles the M->M relation directly, without the intermediate link entity, unlike LINQ to SQL.

    [updated from comments – 12/24/2009] Another approach is to set up both ItemId and CategoryId as a combined primary key in the database. This also works in creating the Category and Item fields in the ItemCategory entity. Ultimately this is probably the best approach, as it also guarantees uniqueness of the keys and so helps database integrity.

    It took me a while to figure out WTF was going on here – lulled by the designer to think that the properties should be there when they were not. It's actually a well documented feature of L2S that each entity in the model requires a PK, but of course that's easy to miss when the model viewer shows it to you and even the underlying XML model shows the Associations properly. This is one of the issues with L2S of course – you have to play by its rules, and once you hit one of those rules there's no way around them – you're stuck with what it requires, which in this case meant changing the database.

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ADO.NET | LINQ
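
    For the composite-key variant described in the update, the change is purely in the database; a sketch (constraint name assumed, table and column names from the article):

        ALTER TABLE dbo.wws_Item_Categories
          ADD CONSTRAINT PK_wws_Item_Categories
          PRIMARY KEY (ItemId, CategoryId);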

    Read the article

  • The new workflow management of Oracle's Hyperion Planning: Define more details with Planning Unit Hierarchies and Promotional Paths

    - by Alexandra Georgescu
    After having been almost unchanged for several years, starting with the 11.1.2 release of Oracle's Hyperion Planning the Process Management has not only got a new name: "Approvals" now offers the possibility to further split Planning Units (comprised of a unique Scenario-Version-Entity combination) into more detailed combinations along additional secondary dimensions, a so-called Planning Unit Hierarchy, and also to pre-define a path of planners, reviewers and approvers, called the Promotional Path. I'd like to introduce you to the changes and enhancements in this new process management and arouse your curiosity for checking out more details on it.

    One reason for using the former process management in Planning was to limit data entry rights to one person at a time based on the assignment of a planning unit. The lowest level of granularity for this assignment was, for a given Scenario-Version combination, the individual entity. Even if in many cases one person wasn't responsible for all data being entered into that entity, but only for part of it, it was not possible to split the ownership along another additional dimension, for example by assigning ownership to different accounts at the same time. By defining a so-called Planning Unit Hierarchy (PUH) in Approvals this gap is now closed.

    Complementary new Shared Services roles for Planning have been created in order to manage the set up and use of Approvals. The Approvals Administrator consists of the following roles:

      - Approvals Ownership Assigner, who assigns owners and reviewers to planning units for which Write access is assigned (including Planner responsibilities).
      - Approvals Supervisor, who stops and starts planning units and takes any action on planning units for which Write access is assigned.
      - Approvals Process Designer, who can modify planning unit hierarchy secondary dimensions and entity members for which Write access is assigned, can also modify scenarios and versions that are assigned to planning unit hierarchies, and can edit validation rules on data forms for which access is assigned (this includes Planner and Ownership Assigner responsibilities as well).

    Set up of a Planning Unit Hierarchy is done under the Administration menu, by selecting Approvals, then Planning Unit Hierarchy. Here you create new PUHs or edit existing ones. After providing a name and an optional description, a pre-selection of entities can be made for which the PUH will be defined. Available options are:

      - All, which pre-selects all entities to be included for the definitions on the subsequent tabs.
      - None; manual entity selections will be made subsequently.
      - Custom, which offers the selection of an ancestor and the relative generations that should be included for further definitions.

    Finally a pattern needs to be selected, which will determine the general flow of ownership:

      - Free-form uses the flow/assignment of ownerships according to Planning releases prior to 11.1.2.
      - In Bottom-up, data input is done at the leaf member level. Ownership follows the hierarchy of approval along the entity dimension, including refinements using a secondary dimension in the PUH, amended by defined additional reviewers in the promotional path.
      - Distributed uses data input at the leaf level, while ownership starts at the top level and then is distributed down the organizational hierarchy (entities). After ownership reaches the lower levels, budgets are submitted back to the top through the approval process.
    Proceeding to the next step, a secondary dimension and the respective members from that dimension may now be selected, in order to create more detailed combinations underneath each entity. After selecting the Dimension and a Parent Member, the definition of a Relative Generation below this member assists in populating the field for Selected Members, while the Count column shows the number of selected members. For refining this list, you might click on the icon right beside the selected member field and use the check-boxes in the appearing list to deselect members.

    --------------------------------------------------------------------------------------------------------
    TIP: In order to reduce maintenance of the PUH due to changes in the dimensions included (members added, moved or removed), you should consider dynamically linking those dimensions in the PUH with the dimension hierarchies in the planning application. For secondary dimensions this is done using the check-boxes in the Auto Include column. For the primary dimension, the respective selection criteria are applied by right-clicking the name of an entity activated as a planning unit, then selecting an item from the shown list of include or exclude options (children, descendants, etc.). In any case, in order to apply dimension changes impacting the PUH, a synchronization must be run. Whether this is really necessary is shown on the first screen after selecting from the menu Administration, then Approvals, then Planning Unit Hierarchy: under Synchronized you find the statuses Yes, No or Locked, where the last one indicates that another user is just changing or synchronizing the PUH. Select one of the not yet synchronized PUHs (status No) and click the Synchronize option in order to execute.
    --------------------------------------------------------------------------------------------------------

    In the next step owners and reviewers are assigned to the PUH. Using the icons with the magnifying glass right beside the columns for Owner and Reviewer, the respective assignments can be made in the order that you want them to review the planning unit. While it is possible to assign only one owner per entity or per combination of entity + member of the secondary dimension, the selection for reviewers might consist of more than one person. The complete Promotional Path, including the defined owners and reviewers for the entity parents, can be shown by clicking the icon. In addition, optional users might be defined to be notified about promotions for a planning unit.

    --------------------------------------------------------------------------------------------------------
    TIP: Reviewers cannot change data, but can only review data according to their data access permissions and reject or promote planning units.
    --------------------------------------------------------------------------------------------------------

    In order to complete your PUH definitions click Finish - this saves the PUH and closes the window. As a final step, before starting the approvals process, you need to assign the PUH to the Scenario-Version combination for which it should be used. From the Administration menu select Approvals, then Scenario and Version Assignment. Expand the PUH in order to see already existing assignments. Under Actions click the add icon and select scenarios and versions to be assigned. If needed, click the remove icon in order to delete entries. After these steps, set up is completed for starting the approvals process.
    Start, stop and control of the approvals process is now done under the Tools menu, then Manage Approvals. The new PUH feature is complemented by various additional settings and features; at least some of them should be mentioned here:

      - Export/Import of PUHs
      - an Out of Office agent
      - Validation Rules that change the promotional/approval path if violated (including the use of User-defined Attributes (UDAs))
      - various new and helpful reviewer actions with corresponding approval states

    About the Author: Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007, where he is a Principal Education Consultant. Based on these many years of working with Hyperion products he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.

    Read the article

  • readonly keyword

    - by nmarun
    This is something new that I learned about the readonly keyword. Have a look at the following class:

        1: public class MyClass
        2: {
        3:     public string Name { get; set; }
        4:     public int Age { get; set; }
        5:
        6:     private readonly double Delta;
        7:
        8:     public MyClass()
        9:     {
        10:        Initializer();
        11:    }
        12:
        13:    public MyClass(string name = "", int age = 0)
        14:    {
        15:        Name = name;
        16:        Age = age;
        17:        Initializer();
        18:    }
        19:
        20:    private void Initializer()
        21:    {
        22:        Delta = 0.2;
        23:    }
        24: }

    I have a couple of public properties and a private readonly member. There are two constructors - one that doesn't take any parameters, and another that takes two parameters to initialize the public properties. I'm also calling the Initializer method in both constructors to initialize the readonly member. Now when I build this, the code breaks and the Error window says:

        "A readonly field cannot be assigned to (except in a constructor or a variable initializer)"

    Two things after I read this message:

      1. It's such a negative statement. I'd prefer something like: "A readonly field can be assigned to (or initialized) only in a constructor or through a variable initializer".
      2. But in my defense, I AM assigning it in a constructor (only indirectly). All I'm doing is creating a method that does it and calling that method in a constructor.

    Turns out, .NET was not 'frameworked' this way. We need to have the member initialized directly in the constructor. If you have multiple constructors, you can just use the 'this' keyword on all except the default constructor to call the default constructor. This default constructor can then initialize your readonly members. This ensures you're not repeating the code in multiple places. A snippet of what I'm talking about can be seen below:

        1: public class Person
        2: {
        3:     public int UniqueNumber { get; set; }
        4:     public string Name { get; set; }
        5:     public int Age { get; set; }
        6:     public DateTime DateOfBirth { get; set; }
        7:     public string InvoiceNumber { get; set; }
        8:
        9:     private readonly string Alpha;
        10:    private readonly int Beta;
        11:    private readonly double Delta;
        12:    private readonly double Gamma;
        13:
        14:    public Person()
        15:    {
        16:        Alpha = "FDSA";
        17:        Beta = 2;
        18:        Delta = 3.0;
        19:        Gamma = 0.0989;
        20:    }
        21:
        22:    public Person(int uniqueNumber) : this()
        23:    {
        24:        UniqueNumber = uniqueNumber;
        25:    }
        26: }

    See the syntax in line 22 and you'll know what I'm talking about. The default constructor gets called before the one in line 22. These are known as constructor initializers, and they allow one constructor to call another.

    The other 'myth' I had about readonly members is that you can set their value only once. This was busted as well (I recall Adam and Jamie's show). Say you've initialized the readonly member through a variable initializer. You can over-write this value in any of the constructors any number of times.
        1: public class Person
        2: {
        3:     public int UniqueNumber { get; set; }
        4:     public string Name { get; set; }
        5:     public int Age { get; set; }
        6:     public DateTime DateOfBirth { get; set; }
        7:     public string InvoiceNumber { get; set; }
        8:
        9:     private readonly string Alpha = "asdf";
        10:    private readonly int Beta = 15;
        11:    private readonly double Delta = 0.077;
        12:    private readonly double Gamma = 1.0;
        13:
        14:    public Person()
        15:    {
        16:        Alpha = "FDSA";
        17:        Beta = 2;
        18:        Delta = 3.0;
        19:        Gamma = 0.0989;
        20:    }
        21:
        22:    public Person(int uniqueNumber) : this()
        23:    {
        24:        UniqueNumber = uniqueNumber;
        25:        Beta = 3;
        26:    }
        27:
        28:    public Person(string name, DateTime dob) : this()
        29:    {
        30:        Name = name;
        31:        DateOfBirth = dob;
        32:
        33:        Alpha = ";LKJ";
        34:        Gamma = 0.0898;
        35:    }
        36:
        37:    public Person(int uniqueNumber, string name, int age, DateTime dob, string invoiceNumber) : this()
        38:    {
        39:        UniqueNumber = uniqueNumber;
        40:        Name = name;
        41:        Age = age;
        42:        DateOfBirth = dob;
        43:        InvoiceNumber = invoiceNumber;
        44:
        45:        Alpha = "QWER";
        46:        Beta = 5;
        47:        Delta = 1.0;
        48:        Gamma = 0.0;
        49:    }
        50: }

    In the above example, every constructor over-writes the values of the readonly members. This is perfectly valid: depending on the way the object is instantiated, the readonly member may end up with a different value. Well, that's all I have for today, and read this as it's on a related topic.

    Read the article

  • Security in Software

    The term security has many meanings based on the context and perspective in which it is used. Security from the perspective of software/system development is the continuous process of maintaining confidentiality, integrity, and availability of a system, sub-system, and system data. At a very high level this definition can be restated as: computer security is a continuous process dealing with confidentiality, integrity, and availability on multiple layers of a system.

    Key aspects of software security:

      - Integrity
      - Confidentiality
      - Availability

    Integrity within a system is the concept of ensuring that information can be manipulated only by authorized users through authorized methods and procedures. An example of this can be seen in a simple lead management application. If the business decided that each sales member may update only their own leads, while sales managers can update all leads in the system, then an integrity violation occurs if a sales member attempts to update someone else's leads, since that breaks the business rule that leads can only be updated by the originating sales member.

    Confidentiality within a system is the concept of preventing unauthorized access to specific information or tools. In a perfect world, those without access would not even know that the confidential information or tools exist; applied within the context of an application, only the authorized information and tools are made available. If we look at the sales lead management system again, leads can only be updated by the originating sales member, so all sales leads are confidential between the system and the sales person who entered the lead into the system. The other sales team members do not need to know about the leads, let alone access them.

    Availability within a system is the concept of authorized users being able to access the system when they need it. A real world example can be seen again in the lead management system. If that system was hosted on a web server, then IP restriction can be put in place to limit access to the system based on the requesting IP address. If in this example all of the sales members were accessing the system from the 192.168.1.23 IP address, then removing access from all other IPs would ensure that improper access to the system is prevented while approved users can access the system from an authorized location. In essence, if the requesting user is not coming from an authorized IP address, the system will appear unavailable to them. This is one way of controlling where a system is accessed from.

    Through the years several design principles have been identified as being beneficial when integrating security aspects into a system. These principles, in various combinations, allow a system to achieve the previously defined aspects of security based on generic architectural models:

      - Least Privilege
      - Fail-Safe Defaults
      - Economy of Mechanism
      - Complete Mediation
      - Open Design
      - Separation of Privilege
      - Least Common Mechanism
      - Psychological Acceptability
      - Defense in Depth

    Least Privilege design principle: the Least Privilege design principle requires a minimalistic approach to granting user access rights to specific information and tools.
    Additionally, access rights should be time based, limiting resource access to the time needed to complete necessary tasks. Granting access beyond this scope allows for unnecessary access and the potential for data to be updated out of the approved context. Assigning access rights in this way limits system-damaging attacks from users, whether intentional or not. This principle attempts to limit data changes and prevents potential damage from occurring by accident or error, by reducing the number of potential interactions with a resource.

    Fail-Safe Defaults design principle: the Fail-Safe Defaults design principle pertains to allowing access to resources based on granted access rather than access exclusion. This principle is a methodology for allowing resources to be accessed only if explicit access has been granted to a user. By default users do not have access to any resources until access is granted. This approach prevents unauthorized users from gaining access to a resource until access is given.

    Economy of Mechanism design principle: the Economy of Mechanism design principle requires that systems be designed as simple and small as possible, because design and implementation errors result in unauthorized access to resources that would not be noticed during normal use.

    Complete Mediation design principle: the Complete Mediation design principle states that every access to every resource must be validated for authorization.

    Open Design design principle: the Open Design design principle is the concept that the security of a system and its algorithms should not depend on secrecy of its design or implementation.

    Separation of Privilege design principle: the Separation of Privilege design principle requires that all resource access attempts be granted based on more than a single condition. For example, a user should be validated both for active status and for access to the specific resource.

    Least Common Mechanism design principle: the Least Common Mechanism design principle declares that mechanisms used to access resources should not be shared.

    Psychological Acceptability design principle: the Psychological Acceptability design principle holds that security mechanisms should not make resources more difficult to access than if the security mechanisms were not present.

    Defense in Depth design principle: the Defense in Depth design principle is the concept that layering resource access authorization verification in a system reduces the chance of a successful attack. This layered approach to resource authorization requires unauthorized users to circumvent each authorization check to gain access to a resource.

    When designing a system that must meet a security quality attribute, architects need to consider the scope of the security needs and the minimum required security qualities. Not every system will need to use all of the basic security design principles; most will use one or more in combination, based on a company's and architect's threshold for system security, because the existence of security in an application adds an additional layer to the overall system and can affect performance. That is why a definition of minimum security acceptability is needed when a system is designed: this quality attribute must be factored in with the other system quality attributes so that the system in question adheres to all qualities based on their priorities.
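
    As a tiny illustration of the Fail-Safe Defaults principle above, consider an authorization check written so that a missing grant means denial (a toy sketch of mine, not from the article):

        # default-deny authorization: access exists only where an explicit
        # grant was recorded; unknown users or resources fall through to False
        GRANTS = {
            ("alice", "lead:42"): {"read", "update"},
        }

        def can(user, resource, action):
            return action in GRANTS.get((user, resource), set())

        assert can("alice", "lead:42", "update")        # explicit grant
        assert not can("bob", "lead:42", "update")      # fail-safe default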

    Read the article

  • elffile: ELF Specific File Identification Utility

    - by user9154181
    Solaris 11 has a new standard user level command, /usr/bin/elffile. elffile is a variant of the file utility that is focused exclusively on linker related files: ELF objects, archives, and runtime linker configuration files. All other files are simply identified as "non-ELF". The primary advantage of elffile over the existing file utility is in the area of archives - elffile examines the archive members and can produce a summary of the contents, or per-member details.

    The impetus to add elffile to Solaris came from the effort to extend the format of Solaris archives so that they could grow beyond their previous 32-bit file limits. That work introduced a new archive symbol table format. Now that there was more than one possible format, I thought it would be useful if the file utility could identify which format a given archive is using, leading me to extend the file utility:

        % cc -c ~/hello.c
        % ar r foo.a hello.o
        % file foo.a
        foo.a: current ar archive, 32-bit symbol table
        % ar r -S foo.a hello.o
        % file foo.a
        foo.a: current ar archive, 64-bit symbol table

    In turn, this caused me to think about all the things that I would like the file utility to be able to tell me about an archive. In particular, I'd like to be able to know what's inside without having to unpack it. The end result of that train of thought was elffile. Much of the discussion in this article is adapted from the PSARC case I filed for elffile in December 2010: PSARC 2010/432 elffile.

    Why file Is No Good For Archives And Yet Should Not Be Fixed

    The standard /usr/bin/file utility is not very useful when applied to archives. When identifying an archive, a user typically wants to know 2 things:

      1. Is this an archive?
      2. Presupposing that the archive contains objects, which is by far the most common use for archives, what platform are the objects for? Are they for sparc or x86? 32 or 64-bit? Some confusing combination from varying platforms?

    The file utility provides a quick answer to question (1), as it identifies all archives as "current ar archive". It does nothing to answer the more interesting question (2). To answer that question requires a multi-step process:

      1. Extract all archive members.
      2. Use the file utility on the extracted files, examine the output for each file in turn, and compare the results to generate a suitable summary description.
      3. Remove the extracted files.

    It should be easier and more efficient to answer such an obvious question. It would be reasonable to extend the file utility to examine archive contents in place and produce a description. However, there are several reasons why I decided not to do so:

      - The correct design for this feature within the file utility would have file examine each archive member in turn, applying its full abilities to each member. This would be elegant, but also represents a rather dramatic redesign and re-implementation of file. Archives nearly always contain nothing but ELF objects for a single platform, so such generality in the file utility would be of little practical benefit.
      - It is best to avoid adding new options to standard utilities for which other implementations of interest exist. In the case of the file utility, one concern is that we might add an option which later appears in the GNU version of file with a different and incompatible meaning. Indeed, there have been discussions about replacing the Solaris file with the GNU version in the past. This may or may not be desirable, and may or may not ever happen. Either way, I don't want to preclude it.
      - Examining archive members is an O(n) operation, and can be relatively slow with large archives. The file utility is supposed to be a very fast operation.

    I decided that extending file in this way is overkill, and that an investment in the file utility for better archive support would not be worth the cost. A solution that is more narrowly focused on ELF and other linker related files is really all that we need. The necessary code for doing this already exists within libelf. All that is missing is a small user-level wrapper to make that functionality available at the command line.

    In that vein, I considered adding an option for this to the elfdump utility. I examined elfdump carefully, and even wrote a prototype implementation. The added code is small and simple, but the conceptual fit with the rest of elfdump is poor. The result complicates elfdump syntax and documentation, definite signs that this functionality does not belong there. And so, I added this functionality as a new user level command.

    The elffile Command

    The syntax for this new command is

        elffile [-s basic | detail | summary] filename...

    Please see the elffile(1) manpage for additional details. To demonstrate how output from elffile looks, I will use the following files:

        File         Description
        config       A runtime linker configuration file produced with crle
        dwarf.o      An ELF object
        /etc/passwd  A text file
        mixed.a      Archive containing a mixture of ELF and non-ELF members
        mixed_elf.a  Archive containing ELF objects for different machines
        not_elf.a    Archive containing no ELF objects
        same_elf.a   Archive containing a collection of ELF objects for the
                     same machine. This is the most common type of archive.

    The file utility identifies these files as follows:

        % file config dwarf.o /etc/passwd mixed.a mixed_elf.a not_elf.a same_elf.a
        config:      Runtime Linking Configuration 64-bit MSB SPARCV9
        dwarf.o:     ELF 64-bit LSB relocatable AMD64 Version 1
        /etc/passwd: ascii text
        mixed.a:     current ar archive, 32-bit symbol table
        mixed_elf.a: current ar archive, 32-bit symbol table
        not_elf.a:   current ar archive
        same_elf.a:  current ar archive, 32-bit symbol table

    By default, elffile uses its "summary" output style. This output differs from the output from the file utility in 2 significant ways:

      1. Files that are not an ELF object, archive, or runtime linker configuration file are identified as "non-ELF", whereas the file utility attempts further identification for such files.
      2. When applied to an archive, the elffile output includes a description of the archive's contents, without requiring member extraction or other additional steps.

    Applying elffile to the above files:

        % elffile config dwarf.o /etc/passwd mixed.a mixed_elf.a not_elf.a same_elf.a
        config:      Runtime Linking Configuration 64-bit MSB SPARCV9
        dwarf.o:     ELF 64-bit LSB relocatable AMD64 Version 1
        /etc/passwd: non-ELF
        mixed.a:     current ar archive, 32-bit symbol table, mixed ELF and non-ELF content
        mixed_elf.a: current ar archive, 32-bit symbol table, mixed ELF content
        not_elf.a:   current ar archive, non-ELF content
        same_elf.a:  current ar archive, 32-bit symbol table, ELF 64-bit LSB relocatable AMD64 Version 1

    The output for same_elf.a is of particular interest: the vast majority of archives contain only ELF objects for a single platform, and in this case, the default output from elffile answers both of the questions about archives posed at the beginning of this discussion, in a single efficient step. This makes elffile considerably more useful than file, within the realm of linker-related files.
    elffile can produce output in two other styles, "basic" and "detail". The basic style produces output that is the same as that from file, for linker-related files. The detail style produces per-member identification of archive contents. This can be useful when the archive contents are not homogeneous ELF objects, and more information is desired than the summary output provides:

        % elffile -s detail mixed.a
        mixed.a: current ar archive, 32-bit symbol table
        mixed.a(dwarf.o): ELF 32-bit LSB relocatable 80386 Version 1
        mixed.a(main.c): non-ELF content
        mixed.a(main.o): ELF 64-bit LSB relocatable AMD64 Version 1 [SSE]

    Read the article

  • struts2-json-plugin not retrieving json data from action class for Struts-JQuery-Plugin grid

    - by thebravedave
    Hello, I'm having an issue getting JSON working with struts-jquery-plugin-2.1.0. I have included struts2-json-plugin-2.1.8.1 in my classpath as well. I'm sure that I have the struts-jquery-plugin configured correctly, because the grid loads but doesn't load the data it is supposed to get from the action class that has been serialized to JSON. The documentation for the json plugin and the struts-jquery plugin leaves a lot of gaps that I can't fill even with examples/tutorials, so I come to the community at stackoverflow.

    My action class has a property called gridModel that's a List of a basic POJO called Customer. Customer is a POJO with one property, id. I have a factory that supplies the populated List to the action's gridModel property. Here's how I set up my struts.xml file:

        <constant name="struts.devMode" value="true"/>
        <constant name="struts.objectFactory" value="guice"/>
        <package name="org.webhop.ywdc" namespace="/" extends="struts-default,json-default">
          <result-types>
            <result-type name="json" class="com.googlecode.jsonplugin.JSONResult">
            </result-type>
          </result-types>
          <action name="login" class="org.webhop.ywdc.LoginAction">
            <result type="json"></result>
            <result name="success" type="dispatcher">/pages/uiTags/Success.jsp</result>
            <result name="error" type="redirect">/pages/uiTags/Login.jsp</result>
            <interceptor-ref name="cookie">
              <param name="cookiesName">JSESSIONID</param>
            </interceptor-ref>
          </action>
          <action name="logout" class="org.webhop.ywdc.LogoutAction">
            <result name="success" type="redirect">/pages/uiTags/Login.jsp</result>
          </action>
        </package>

    In the struts.xml file I set the result type and listed it in the action configuration. Here's the JSP page that the action loads:

        <%@ taglib prefix="s" uri="/struts-tags" %>
        <%@ taglib prefix="sj" uri="/struts-jquery-tags"%>
        <%@ taglib prefix="sjg" uri="/struts-jquery-grid-tags"%>
        <%@ page language="java" contentType="text/html" import="java.util.*"%>

        Welcome, you have logged in!
        <s:url id="remoteurl" action="login"/>
        <sjg:grid id="gridtable"
                  caption="Customer Examples"
                  dataType="json"
                  href="%{remoteurl}"
                  pager="false"
                  gridModel="gridModel">
          <sjg:gridColumn name="id" key="true" index="id" title="ID" formatter="integer" sortable="false"/>
        </sjg:grid>

        Welcome, you have logged in.
        <br />
        <b>Session Time: </b><%=new Date(session.getLastAccessedTime())%>
        <h2>Password:<s:property value="password"/></h2>
        <h2>userId:<s:property value="userId"/></h2>
        <br />
        <a href="<%= request.getContextPath() %>/logout.action">Logout</a><br /><br />
        ID: <s:property value="id"/>
        session id: <s:property value="JSESSIONID"/>
        </body>

    I'm not really sure how to tell what JSON the json plugin is creating from the action class; if I knew, I could tell whether it is malformed. As far as I know, given the result configured in my action in struts.xml, the grid - which is set to read JSON and knows to look for "gridModel" - should automatically load the JSON into the grid, but it's not.
    Here's my action class:

        public class LoginAction extends ActionSupport {
            public String JSESSIONID;
            public int id;
            private String userId;
            private String password;
            public Members member;
            public List<Customer> gridModel;

            public String execute() {
                Cookie cookie = new Cookie("ywdcsid", password);
                cookie.setMaxAge(3600);
                HttpServletResponse response = ServletActionContext.getResponse();
                response.addCookie(cookie);
                HttpServletRequest request = ServletActionContext.getRequest();
                Cookie[] ckey = request.getCookies();
                for (Cookie c : ckey) {
                    System.out.println(c.getName() + "/cookie_name + " + c.getValue() + "/cookie_value");
                }
                Map requestParameters = ActionContext.getContext().getParameters();
                String[] testString = (String[]) requestParameters.get("password");
                String passwordString = testString[0];
                String[] usernameArray = (String[]) requestParameters.get("userId");
                String usernameString = usernameArray[0];
                Injector injector = Guice.createInjector(new GuiceModule());
                HibernateConnection connection = injector.getInstance(HibernateConnection.class);
                AuthenticationServices currentService = injector.getInstance(AuthenticationServices.class);
                currentService.setConnection(connection);
                currentService.setInjector(injector);
                member = currentService.getMemberByUsernamePassword(usernameString, passwordString);
                userId = member.getUsername();
                password = member.getPassword();
                CustomerFactory customerFactory = new CustomerFactory();
                gridModel = customerFactory.getCustomers();
                if (member == null) {
                    return ERROR;
                } else {
                    id = member.getId();
                    Map session = ActionContext.getContext().getSession();
                    session.put(usernameString, member);
                    return SUCCESS;
                }
            }

            public String logout() throws Exception {
                Map session = ActionContext.getContext().getSession();
                session.remove("logged-in");
                return SUCCESS;
            }

            public List<Customer> getGridModel() { return gridModel; }
            public void setGridModel(List<Customer> gridModel) { this.gridModel = gridModel; }
            public String getPassword() { return password; }
            public void setPassword(String password) { this.password = password; }
            public String getUserId() { return userId; }
            public void setUserId(String userId) { this.userId = userId; }
            public String getJSESSIONID() { return JSESSIONID; }
            public void setJSESSIONID(String jsessionid) { JSESSIONID = jsessionid; }
        }

    Please help me with this problem. You will make my week, as this is a major bottleneck for me :( Thanks so much, thebravedave
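    One way to tell what JSON the plugin creates (a debugging suggestion, not part of the original post) is to request login.action directly in a browser or with curl and read the raw response. With extends="json-default" and a json result, the whole action is serialized, so the grid should find a top-level "gridModel" property in the response. If the response turns out to be cluttered with other action properties, the json plugin's excludeProperties parameter can trim them; the parameter itself is from the json-plugin docs, and the property names below are taken from the poster's action, shown only as a sketch:

        <result type="json">
            <!-- sketch: keep gridModel at the root but drop properties the grid doesn't need -->
            <param name="excludeProperties">member, password, JSESSIONID</param>
        </result>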

    Read the article

  • How to retrieve message list from p2p

    - by cre-johnny07
    Hello friends, I have a messaging system that uses p2p. Each peer has an incoming message list and an outgoing message list. What I need is that whenever a new peer joins the mesh, it gets all the incoming messages from the other peers and adds them to its own incoming message list. Now, I know that when I get the other peers' info I can ask them to give me their own lists, but I'm not finding the way how. Any suggestion or help would be highly appreciated. I'm giving my code below. Thanking in advance, Johnny

        #region Instance Fields
        private string strOrigin = "";
        // the chat member name
        private string m_Member;
        // the channel instance where we execute our service methods against
        private IServerChannel m_participant;
        // the instance context, which in this case is our window since it is the service host
        private InstanceContext m_site;
        // our binding transport for the p2p mesh
        private NetPeerTcpBinding m_binding;
        // the factory to create our chat channel
        private ChannelFactory<IServerChannel> m_channelFactory;
        // an interface provided by the channel exposing events to indicate
        // when we have connected or disconnected from the mesh
        private IOnlineStatus o_statusHandler;
        // a generic delegate to execute a thread against that accepts no args
        private delegate void NoArgDelegate();
        // an object to hold user details
        private IUserService userService;
        // an ObservableCollection holding all the application instance details in the database
        ObservableCollection<AppLoginInstance> appLoginInstances;
        // an ObservableCollection of all incoming message types
        ObservableCollection<MessageType> inComingMessageTypes;
        // an ObservableCollection of all outgoing messages
        ObservableCollection<PDCL.ERP.DataModels.Message> outGoingMessages;
        // an ObservableCollection of all incoming messages
        ObservableCollection<PDCL.ERP.DataModels.Message> inComingMessages;
        // an event aggregator to publish events for other modules to subscribe to
        private readonly IEventAggregator eventAggregator;
        // an IUnityContainer to get the container
        private IUnityContainer container;
        private RefreshConnectionStatus refreshConnectionStatus;
        private RefreshConnectionStatusEventArgs args;
        private ReplyRequestMessage replyMessageRequest;
        private ReplyRequestMessageEventArgs eventsArgs;
        #endregion

        public P2pMessageService(IUserService UserService, IEventAggregator EventAggregator, IUnityContainer container)
        {
            userService = UserService;
            this.container = container;
            appLoginInstances = new ObservableCollection<AppLoginInstance>();
            inComingMessageTypes = new ObservableCollection<MessageType>();
            inComingMessages = new ObservableCollection<PDCL.ERP.DataModels.Message>();
            outGoingMessages = new ObservableCollection<PDCL.ERP.DataModels.Message>();
            this.args = new RefreshConnectionStatusEventArgs();
            this.eventsArgs = new ReplyRequestMessageEventArgs();
            this.eventAggregator = EventAggregator;
            this.refreshConnectionStatus = this.eventAggregator.GetEvent<RefreshConnectionStatus>();
            this.replyMessageRequest = this.eventAggregator.GetEvent<ReplyRequestMessage>();
        }

        #region IOnlineStatus Event Handlers

        void ostat_Offline(object sender, EventArgs e)
        {
            // we could update a status bar or animate an icon to indicate
            // to the user that they have disconnected from the mesh;
            // currently I don't have a "disconnect" button, but adding it
            // should be trivial if you understand the rest of this code
        }

        void ostat_Online(object sender, EventArgs e)
        {
            try
            {
                m_participant.Join(userService.AppInstance);
            }
            catch (Exception Ex)
            {
                Logger.Exception(Ex, Ex.TargetSite.Name + ": " + Ex.TargetSite + ": " + Ex.Message);
            }
        }

        #endregion

        #region IServer Members

        // this method gets called from a background thread to connect the service
        // client to the p2p mesh specified by the binding info in the app.config
        public void ConnectToMesh()
        {
            try
            {
                m_site = new InstanceContext(this);
                // use the binding from the app.config with default settings
                m_binding = new NetPeerTcpBinding("P2PMessageBinding");
                m_channelFactory = new DuplexChannelFactory<IServerChannel>(m_site, "P2PMessageEndPoint");
                m_participant = m_channelFactory.CreateChannel();
                o_statusHandler = m_participant.GetProperty<IOnlineStatus>();
                o_statusHandler.Online += new EventHandler(ostat_Online);
                o_statusHandler.Offline += new EventHandler(ostat_Offline);
                //m_participant.InitializeMesh();
                //this.appLoginInstances.Add(this.userService.AppInstance);
                BackgroundWorkerHelper.DoWork<object>(() =>
                {
                    // InitializeMesh is an empty, unhandled method on the service interface.
                    // why? because for some reason p2p clients don't try to connect to the mesh
                    // until the first service method call, so to facilitate connecting I call
                    // this method to get the ball rolling.
                    m_participant.InitializeMesh();
                    //SynchronizeMessage(this.inComingMessages);
                    return new object();
                }, arg => { });
                this.appLoginInstances.Add(this.userService.AppInstance);
            }
            catch (Exception Ex)
            {
                Logger.Exception(Ex, Ex.TargetSite.Name + ": " + Ex.TargetSite + ": " + Ex.Message);
            }
        }

        public void Join(AppLoginInstance obj)
        {
            try
            {
                // add the instance to the peer list if it isn't there already
                if (appLoginInstances.SingleOrDefault(a => a.InstanceId == obj.InstanceId) == null)
                {
                    appLoginInstances.Add(obj);
                    this.refreshConnectionStatus.Publish(new RefreshConnectionStatusEventArgs() { Status = m_channelFactory.State });
                }
                // this will retrieve any new members that have joined before the current user
                m_participant.SynchronizeMemberList(userService.AppInstance);
            }
            catch (Exception Ex)
            {
                Logger.Exception(Ex, Ex.TargetSite.Name + ": " + Ex.TargetSite + ": " + Ex.Message);
            }
        }

        /// <summary>
        /// Synchronizes the member list.
        /// </summary>
        /// <param name="obj">The AppLoginInstance param</param>
        public void SynchronizeMemberList(AppLoginInstance obj)
        {
            // as member names come in we simply disregard duplicates and add them
            // to the member list; this way we can retrieve a list of members already
            // in the chatroom when we enter at any time. again, since this is just an
            // example, this is the simplified way to do things. the correct way would
            // be to retrieve a list of peer names and retrieve the metadata from each
            // one, which would tell us what the member name is, and add it. we would
            // want to check this list when we join the mesh to make sure our member
            // name doesn't conflict with someone else's.
            try
            {
                if (appLoginInstances.SingleOrDefault(a => a.InstanceId == obj.InstanceId) == null)
                {
                    appLoginInstances.Add(obj);
                }
            }
            catch (Exception Ex)
            {
                Logger.Exception(Ex, Ex.TargetSite.Name + ": " + Ex.TargetSite + ": " + Ex.Message);
            }
        }

        /// <summary>
        /// This method broadcasts the message to all peers.
        /// </summary>
        /// <param name="msg">The whole message which is to be broadcast</param>
        /// <param name="securityLevels">Levels of security</param>
        public void BroadCastMsg(PDCL.ERP.DataModels.Message msg, List<string> securityLevels)
        {
            try
            {
                foreach (string s in securityLevels)
                {
                    if (this.userService.IsInRole(s))
                    {
                        if (this.inComingMessages.Count == 0 && msg.CreatedByApp != this.userService.AppInstanceId)
                        {
                            this.inComingMessages.Add(msg);
                        }
                        else if (this.inComingMessages.SingleOrDefault(a => a.MessageId == msg.MessageId) == null && msg.CreatedByApp != this.userService.AppInstanceId)
                        {
                            this.inComingMessages.Add(msg);
                        }
                    }
                }
            }
            catch (Exception Ex)
            {
                Logger.Exception(Ex, Ex.TargetSite.Name + ": " + Ex.TargetSite + ": " + Ex.Message);
            }
        }

        /// <summary>
        /// Publishes a reply for a message and removes it from the incoming list.
        /// </summary>
        /// <param name="msg">The message to be denied</param>
        public void BroadCastReplyMsg(PDCL.ERP.DataModels.Message msg)
        {
            try
            {
                //if (this.inComingMessages.SingleOrDefault(a => a.MessageId == msg.MessageId) != null)
                //{
                this.replyMessageRequest.Publish(new ReplyRequestMessageEventArgs() { Message = msg });
                this.inComingMessages.Remove(this.inComingMessages.SingleOrDefault(o => o.MessageId == msg.MessageId));
                //}
            }
            catch (Exception ex)
            {
                Logger.Exception(ex, ex.TargetSite.Name + ": " + ex.TargetSite + ": " + ex.Message);
            }
        }

        // again we need to sync the worker thread with the UI thread via Dispatcher
        public void Whisper(string Member, string MemberTo, string Message)
        {
        }

        public void InitializeMesh()
        {
            // do nothing
        }

        public void Leave(AppLoginInstance obj)
        {
            if (this.appLoginInstances.SingleOrDefault(a => a.InstanceId == obj.InstanceId) != null)
            {
                this.appLoginInstances.Remove(this.appLoginInstances.Single(a => a.InstanceId == obj.InstanceId));
            }
        }

        //public void SynchronizeRemoveMemberList(AppLoginInstance obj)
        //{
        //    if (appLoginInstances.SingleOrDefault(a => a.InstanceId == obj.InstanceId) != null)
        //    {
        //        appLoginInstances.Remove(obj);
        //    }
        //}

        #endregion
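    Not from the original post, but one way to approach this, mirroring the existing Join/SynchronizeMemberList pattern: add a pair of operations to the service contract so a joining peer can ask the mesh for the backlog, and every existing peer answers by pushing its own inComingMessages back through the channel. The operation names RequestMessageSync and PushMessages are hypothetical and would need [OperationContract(IsOneWay = true)] entries on IServer; everything else reuses types from the code above:

        // sketch only: hypothetical contract methods, not part of the poster's IServer yet.
        // the joining peer would call m_participant.RequestMessageSync(userService.AppInstance)
        // from Join(), right where SynchronizeMemberList is called today.

        // runs on every existing peer when a newcomer asks for the current backlog
        public void RequestMessageSync(AppLoginInstance requester)
        {
            // don't answer our own broadcast
            if (requester.InstanceId == userService.AppInstance.InstanceId) return;
            // push our backlog into the mesh; receivers de-duplicate by MessageId
            m_participant.PushMessages(inComingMessages.ToList());
        }

        // runs on every peer (including the newcomer) when a backlog is pushed
        public void PushMessages(List<PDCL.ERP.DataModels.Message> messages)
        {
            foreach (var msg in messages)
            {
                if (inComingMessages.All(m => m.MessageId != msg.MessageId))
                {
                    inComingMessages.Add(msg);
                }
            }
        }

    Because every peer sees the push, duplicates are filtered on receipt by MessageId, the same way BroadCastMsg already filters broadcasts.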

    Read the article

  • Could corosync support unicast heartbeat mode?

    - by Emre He
    Can corosync support a unicast heartbeat mode? In another thread on Server Fault, someone posted the corosync configuration below:

        totem {
            version: 2
            secauth: off
            interface {
                member {
                    memberaddr: 10.xxx.xxx.xxx
                }
                member {
                    memberaddr: 10.xxx.xxx.xxx
                }
                ringnumber: 0
                bindnetaddr: 10.xxx.xxx.xxx
                mcastport: 694
            }
            transport: udpu
        }

    Does this configuration mean unicast mode? Thanks, Emre
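    Not part of the original question, but as a pointer: transport: udpu selects corosync's UDP unicast transport (added in corosync 1.3), in which the member/memberaddr entries take the place of multicast discovery, so a configuration like the one above is indeed unicast. One way to check what a running cluster is using, assuming corosync 2.x's corosync-cmapctl (older releases shipped corosync-objctl for the same job):

        # print the transport the running cluster was started with;
        # for unicast this typically shows: totem.transport (str) = udpu
        corosync-cmapctl | grep totem.transport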

    Read the article

  • Remove folder structure from archive and fix error

    - by Michael
    I am trying to make a script to back up each of my Plesk hosts to individual files, and I am having two problems: I would like to remove the folder structure from the archive (the tar is three folders deep), and I am getting this error: tar: Removing leading `/' from member names. The code:

        FILES=/var/www/vhosts/*
        FNAME=""
        for f in $FILES
        do
          FNAME=`basename $f`
          tar cfv "/root/backup/ftp/$FNAME.tar" $f
        done

    Sample output:

        tar: Removing leading `/' from member names
        /var/www/vhosts/mydomain.com/
        /var/www/vhosts/mydomain.com/conf
        /var/www/vhosts/mydomain.com/etc/
        /var/www/vhosts/mydomain.com/etc/group
        /var/www/vhosts/mydomain.com/etc/termcap
        /var/www/vhosts/mydomain.com/etc/passwd
        /var/www/vhosts/mydomain.com/usr/
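    A common fix (a sketch, not from the original post): use tar's -C option to change into the parent directory before archiving, so members are stored relative to /var/www/vhosts instead of as absolute paths. That flattens the three leading directories and also silences the warning, since the warning is simply tar reporting that it strips the leading / from absolute member names:

        FILES=/var/www/vhosts/*
        for f in $FILES
        do
          FNAME=$(basename "$f")
          # -C makes tar cd to /var/www/vhosts first, so the archive contains
          # "mydomain.com/..." rather than "/var/www/vhosts/mydomain.com/..."
          tar cfv "/root/backup/ftp/$FNAME.tar" -C /var/www/vhosts "$FNAME"
        done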

    Read the article

  • Group traffic shaping with traffic control?

    - by mmcbro
    I'm trying to limit the output bandwidth generated by an application with Linux tc. The application sends me the source port of each request, which I use as a filter to limit each user to a given download speed. I feel that my setup could be managed much better if I had a better knowledge of Linux tc. At the application level, users are categorized as members of a group, and each group has a limited bandwidth. Example:

        Members of group A: 512kbit/s
        Members of group B: 1Mbit/s
        Members of group C: 2Mbit/s

    When a user connects to the application, it retrieves the source port at the origin of the request and sends me that port together with the bandwidth at which the user must be limited, depending on the group he belongs to. With this information I must add the appropriate rules so that the user (the source port, in reality) is limited to the right bandwidth. If the user that connects isn't a member of any group, he should be limited at a default bandwidth speed. I'm currently managing this with a self-made daemon that adds or removes rules when it receives a request from the application. Here is the base of my tc qdisc and classes:

        tc qdisc add dev eth0 root handle 1: htb
        tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbps ceil 125mbps

    To classify a user at a given speed I have to add one subclass and then associate one filter to it (note that u32 sport matches need a 16-bit mask, hence the 0xffff):

        # a member of group A
        tc class add dev eth0 parent 1:1 classid 1:11 htb rate 512kbps ceil 512kbps
        # its associated filter to match his source port
        tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 50001 0xffff flowid 1:11

        # another member of group A
        tc class add dev eth0 parent 1:1 classid 1:12 htb rate 512kbps ceil 512kbps
        # its associated filter to match his source port
        tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 61524 0xffff flowid 1:12

        # a member of group B
        tc class add dev eth0 parent 1:1 classid 1:13 htb rate 1000kbps ceil 1000kbps
        # its associated filter to match his source port
        tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 57200 0xffff flowid 1:13

    I already know that a source port could be the same if it comes from a different IP address; the thing is, the application is behind a proxy, so I don't have to manage any IP addresses in this situation. I would like to know how to limit all other users (ones that aren't in a group, all others in fact) to a given speed each. I mean that each connection should be able to use at most 100kbit/s, for example, not a shared 100kbit/s. I also would like to know if there is a way to simplify my rules. I don't know if it is possible to use only one class per group and associate multiple filters to the same class, so that each user could be handled by one class instead of one class per user. I appreciate any advice, thanks.
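    A sketch of how the simplified layout could look (a suggestion, not from the original post; rates, classids and ports are illustrative): give the root htb qdisc a default class so every unmatched connection is capped, create one class per group, and point as many filters as needed at the same classid:

        # unmatched traffic falls into class 1:99 because of "default 99"
        tc qdisc add dev eth0 root handle 1: htb default 99
        tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbps ceil 125mbps
        # the default class for users that belong to no group
        tc class add dev eth0 parent 1:1 classid 1:99 htb rate 100kbit ceil 100kbit

        # one class per group instead of one class per user
        tc class add dev eth0 parent 1:1 classid 1:10 htb rate 512kbit ceil 512kbit   # group A
        tc class add dev eth0 parent 1:1 classid 1:20 htb rate 1mbit ceil 1mbit       # group B

        # several filters may point at the same class
        tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 50001 0xffff flowid 1:10
        tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 61524 0xffff flowid 1:10
        tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 57200 0xffff flowid 1:20

    The caveat is the one the question already hints at: with one class per group, that group's members share the class's bandwidth rather than each getting the full rate, so hard per-user guarantees still need per-user classes. Attaching an sfq leaf qdisc to each group class at least shares the group's rate fairly between flows.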

    Read the article
