Search Results

Search found 5872 results on 235 pages for 'authorize attribute'.

Page 138/235

  • Entity Object Extension in Oracle Application R12

    - by Manoj Madhusoodanan
    In this blog I will explain how to perform an Entity Object (EO) extension. As a prerequisite, please read my previous blog. I am doing this exercise based on a PL/SQL EO. The following attributes are part of FndUserEO. Here I will add a validation to the UserName attribute: "Length should be > 5". The following steps need to be performed. 1) Download all files of "Entity Object Based on PL/SQL" to JDEV_USER_HOME/myprojects and JDEV_USER_HOME/myclasses. If you want to see the content of the source java file, decompile it and save it in JDEV_USER_HOME/myprojects. 2) Create a new Entity Object XXFndUserEO as follows. Include all attributes of the parent EO. 3) Add the validation code snippet to XXFndUserEOImpl.java as follows. 4) Create the substitution as follows. 5) Migrate the files to $JAVA_TOP: xxcustom.oracle.apps.fnd.user.schema.server.XXFndUserEOImpl.java and xxcustom.oracle.apps.fnd.user.schema.server.XXFndUserEO.xml 6) Migrate the substitution. 7) Bounce the server. 8) Verify that the substitution has been applied properly. Access the Create User page and create a user. You can see the validation message if the user name length is less than 5. Give the User Name as XXCUST4 and verify the table. The FND_USER record has been created successfully.
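
    The post refers to the validation snippet of step 3 without reproducing it here, so the following is only a rough sketch of what such a setter-level check in XXFndUserEOImpl.java could look like; the attribute constant USERNAME, the message application short name and the message name are assumptions, not taken from the original post.

    // Hypothetical sketch of the UserName length validation in XXFndUserEOImpl.java.
    // USERNAME is assumed to be the generated attribute constant; XXCUST /
    // XXCUST_USER_NAME_TOO_SHORT is an assumed FND message - adjust to your own setup.
    public void setUserName(String value)
    {
        if (value != null && value.length() <= 5)
        {
            throw new OAAttrValException(OAException.TYP_ENTITY_OBJECT,
                                         getEntityDef().getFullName(),  // entity full name
                                         getPrimaryKey(),               // primary key
                                         "UserName",                    // attribute name
                                         value,                         // offending value
                                         "XXCUST",                      // message application short name
                                         "XXCUST_USER_NAME_TOO_SHORT"); // message name
        }
        setAttributeInternal(USERNAME, value);
    }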

    Read the article

  • implementing dynamic query handler on historical data

    - by user2390183
    EDIT: Refined the question to focus on the core issue. Context: I have historical data about property (house) sales collected from various sources in a centralized/cloud data source (assume info collection is handled by a third party). I am planning to develop an application to query and retrieve data from this centralized data source. Example queries: Simple: for a given XYZ post code, what is the average house price for a 3 bedroom house? Complex: What is the estimated price for a house at "DD, Some Street, XYZ Post Code" (worked out from average values of historic data filtered by various characteristics of the house: post code, number of bedrooms, total area, and deeper insights like building type, year built, features)? In addition to the average price, the application should support other property info: maximum or minimum price, etc., and a trend (graph) of a selected property attribute over a period of time. Hence, the queries should not enforce the search based on a primary key or a few fixed fields. In other words, queries can be: What is the change in 3 bedroom house prices (irrespective of location) over the last 30 days? What kind of properties can we get for price X (irrespective of location or house type)? The challenge I have is identifying the domain (BI/data analytics, DB design, DB query interface, DW, or something else) this problem (dynamic query on historic data) belongs to, so that I can explore it further. My findings so far (I could be wrong on the following, so please correct me if you think so): I briefly read about BI/data analytics - I think it is a heavyweight solution for my problem and has scalability issues. DB design - as I understand it, an RDBMS works well if you know the data model at design time. I am expecting the attributes about a property or another entity (user) that I am going to bring in to evolve quickly, hence maintenance would be an issue. As I am going to have multiple users executing queries at the same time, performance would be a bottleneck. Other options like a graph DB (http://www.tinkerpop.com/) seem a bit complex (they are good, but using tools meant for a generic purpose makes me feel like I am doing assembly programming to solve my problem). Big Data related solutions are for analysing data from multiple unrelated domains. So, any suggestion on the space this problem fits in? (Especially if you have design/implementation experience of the back-end for property listing or similar portals.)

    Read the article

  • Advice on reconciling discordant data

    - by Justin
    Let me support my question with a quick scenario. We're writing an app for family meal planning. We'll produce daily plans with a target calorie goal and meals to achieve it for our nuclear family. Our calorie goal will be calculated for each person from their attributes (gender, age, weight, activity level). The weight attribute is the simplest example here. When Dad (the fascist nerd who is inflicting this on his family) first uses the application he throws approximate values into it for Daughter. He thinks she is 5'2" (157 cm) and 125 lbs (56 kg). The next day Mom sits down to generate the menu, looks back over what the bumbling Dad did, quietly fumes that he can never recall anything about the family, and says the value is really 118 lbs! This is the first introduction of the discord. It seems, in this scenario, Mom is probably more correct than Dad, though both are only an approximation of the actual value. The next day the dear Daughter decides to use the program and sees her weight listed. With the vanity only a teenager could muster she changes the weight to 110 lbs. Later that day the Mom returns home from a doctor's visit the Daughter needed and decides that it would be a good idea to update her Daughter's weight in the program. Hooray, another value, this time 117 lbs. Now how do you reconcile these data points? Measurement error, confidence in parties, bias, and more all confound the data. In some idealized world we'd have a weight authority of some nature providing the one and only truth. How about in our world though? And the icing on the cake is that this single data point changes over time. How have you guys solved or managed this conflict?

    Read the article

  • Website Design; SEO Dilemma

    - by lemonpole
    Okay, so I designed a website for a restaurant, and the design is aimed mostly at enticing the viewer with images of the restaurant's platters and foods. That's not to say that text is totally non-existent, but the design makes it hard to have enough keywords. Most keywords are found in the ALT attribute of image tags and a couple of headers. Why am I in this dilemma? I'm still new to web development, and at the time I made the design I didn't really know much about SEO. So I come here in search of help because I have an idea... Would it be good practice to have hidden SPAN blocks to help me fill in keywords? For example, a hidden SPAN would have text in bold to help with SEO. Of course, I will play it safe and not exploit this technique if it works. I have read that this may be considered spamming by search engines and that measures are being taken to prevent it. Thanks in advance!

    Read the article

  • Creating an ASM DiskGroup on Exadata

    - by Liu Maclean
    Creating an ASM disk group on Exadata involves the following steps:
    1. Use dcli -g /home/oracle/cell_group -l root cellcli -e list griddisk to list the griddisks on all cells and confirm that they are active:
    [root@dm01db01 ~]# dcli -g /home/oracle/cell_group -l root cellcli -e list griddisk
    dm01cel01: DATA_DM01_CD_00_dm01cel01 active
    dm01cel01: DATA_DM01_CD_01_dm01cel01 active
    dm01cel01: DATA_DM01_CD_02_dm01cel01 active
    dm01cel01: DATA_DM01_CD_03_dm01cel01 active
    dm01cel01: DATA_DM01_CD_04_dm01cel01 active
    dm01cel01: DATA_DM01_CD_05_dm01cel01 active
    dm01cel01: DATA_DM01_CD_06_dm01cel01 active
    dm01cel01: DATA_DM01_CD_07_dm01cel01 active
    dm01cel01: DATA_DM01_CD_08_dm01cel01 active
    dm01cel01: DATA_DM01_CD_09_dm01cel01 active
    dm01cel01: DATA_DM01_CD_10_dm01cel01 active
    dm01cel01: DATA_DM01_CD_11_dm01cel01 active
    dm01cel01: DBFS_DG_CD_02_dm01cel01 active
    dm01cel01: DBFS_DG_CD_03_dm01cel01 active
    dm01cel01: DBFS_DG_CD_04_dm01cel01 active
    dm01cel01: DBFS_DG_CD_05_dm01cel01 active
    dm01cel01: DBFS_DG_CD_06_dm01cel01 active
    dm01cel01: DBFS_DG_CD_07_dm01cel01 active
    dm01cel01: DBFS_DG_CD_08_dm01cel01 active
    dm01cel01: DBFS_DG_CD_09_dm01cel01 active
    dm01cel01: DBFS_DG_CD_10_dm01cel01 active
    dm01cel01: DBFS_DG_CD_11_dm01cel01 active
    dm01cel01: RECO_DM01_CD_00_dm01cel01 active
    dm01cel01: RECO_DM01_CD_01_dm01cel01 active
    dm01cel01: RECO_DM01_CD_02_dm01cel01 active
    dm01cel01: RECO_DM01_CD_03_dm01cel01 active
    dm01cel01: RECO_DM01_CD_04_dm01cel01 active
    dm01cel01: RECO_DM01_CD_05_dm01cel01 active
    dm01cel01: RECO_DM01_CD_06_dm01cel01 active
    dm01cel01: RECO_DM01_CD_07_dm01cel01 active
    dm01cel01: RECO_DM01_CD_08_dm01cel01 active
    dm01cel01: RECO_DM01_CD_09_dm01cel01 active
    dm01cel01: RECO_DM01_CD_10_dm01cel01 active
    dm01cel01: RECO_DM01_CD_11_dm01cel01 active
    dm01cel02: DATA_DM01_CD_00_dm01cel02 active
    dm01cel02: DATA_DM01_CD_01_dm01cel02 active
    dm01cel02: DATA_DM01_CD_02_dm01cel02 active
    dm01cel02: DATA_DM01_CD_03_dm01cel02 active
    dm01cel02: DATA_DM01_CD_04_dm01cel02 active
    dm01cel02: DATA_DM01_CD_05_dm01cel02 active
    dm01cel02: DATA_DM01_CD_06_dm01cel02 active
    dm01cel02: DATA_DM01_CD_07_dm01cel02 active
    dm01cel02: DATA_DM01_CD_08_dm01cel02 active
    dm01cel02: DATA_DM01_CD_09_dm01cel02 active
    dm01cel02: DATA_DM01_CD_10_dm01cel02 active
    dm01cel02: DATA_DM01_CD_11_dm01cel02 active
    dm01cel02: DBFS_DG_CD_02_dm01cel02 active
    dm01cel02: DBFS_DG_CD_03_dm01cel02 active
    dm01cel02: DBFS_DG_CD_04_dm01cel02 active
    dm01cel02: DBFS_DG_CD_05_dm01cel02 active
    dm01cel02: DBFS_DG_CD_06_dm01cel02 active
    dm01cel02: DBFS_DG_CD_07_dm01cel02 active
    dm01cel02: DBFS_DG_CD_08_dm01cel02 active
    dm01cel02: DBFS_DG_CD_09_dm01cel02 active
    dm01cel02: DBFS_DG_CD_10_dm01cel02 active
    dm01cel02: DBFS_DG_CD_11_dm01cel02 active
    dm01cel02: RECO_DM01_CD_00_dm01cel02 active
    dm01cel02: RECO_DM01_CD_01_dm01cel02 active
    dm01cel02: RECO_DM01_CD_02_dm01cel02 active
    dm01cel02: RECO_DM01_CD_03_dm01cel02 active
    dm01cel02: RECO_DM01_CD_04_dm01cel02 active
    dm01cel02: RECO_DM01_CD_05_dm01cel02 active
    dm01cel02: RECO_DM01_CD_06_dm01cel02 active
    dm01cel02: RECO_DM01_CD_07_dm01cel02 active
    dm01cel02: RECO_DM01_CD_08_dm01cel02 active
    dm01cel02: RECO_DM01_CD_09_dm01cel02 active
    dm01cel02: RECO_DM01_CD_10_dm01cel02 active
    dm01cel02: RECO_DM01_CD_11_dm01cel02 active
    dm01cel03: DATA_DM01_CD_00_dm01cel03 active
    dm01cel03: DATA_DM01_CD_01_dm01cel03 active
    dm01cel03: DATA_DM01_CD_02_dm01cel03 active
    dm01cel03: DATA_DM01_CD_03_dm01cel03 active
    dm01cel03: DATA_DM01_CD_04_dm01cel03 active
    dm01cel03: DATA_DM01_CD_05_dm01cel03 active
    dm01cel03: DATA_DM01_CD_06_dm01cel03 active
    dm01cel03: DATA_DM01_CD_07_dm01cel03 active
    dm01cel03: DATA_DM01_CD_08_dm01cel03 active
    dm01cel03: DATA_DM01_CD_09_dm01cel03 active
    dm01cel03: DATA_DM01_CD_10_dm01cel03 active
    dm01cel03: DATA_DM01_CD_11_dm01cel03 active
    dm01cel03: DBFS_DG_CD_02_dm01cel03 active
    dm01cel03: DBFS_DG_CD_03_dm01cel03 active
    dm01cel03: DBFS_DG_CD_04_dm01cel03 active
    dm01cel03: DBFS_DG_CD_05_dm01cel03 active
    dm01cel03: DBFS_DG_CD_06_dm01cel03 active
    dm01cel03: DBFS_DG_CD_07_dm01cel03 active
    dm01cel03: DBFS_DG_CD_08_dm01cel03 active
    dm01cel03: DBFS_DG_CD_09_dm01cel03 active
    dm01cel03: DBFS_DG_CD_10_dm01cel03 active
    dm01cel03: DBFS_DG_CD_11_dm01cel03 active
    dm01cel03: RECO_DM01_CD_00_dm01cel03 active
    dm01cel03: RECO_DM01_CD_01_dm01cel03 active
    dm01cel03: RECO_DM01_CD_02_dm01cel03 active
    dm01cel03: RECO_DM01_CD_03_dm01cel03 active
    dm01cel03: RECO_DM01_CD_04_dm01cel03 active
    dm01cel03: RECO_DM01_CD_05_dm01cel03 active
    dm01cel03: RECO_DM01_CD_06_dm01cel03 active
    dm01cel03: RECO_DM01_CD_07_dm01cel03 active
    dm01cel03: RECO_DM01_CD_08_dm01cel03 active
    dm01cel03: RECO_DM01_CD_09_dm01cel03 active
    dm01cel03: RECO_DM01_CD_10_dm01cel03 active
    dm01cel03: RECO_DM01_CD_11_dm01cel03 active
    If the griddisks need to be relaid out, 'cellcli -e drop griddisk' and 'cellcli -e create griddisk' can be used to drop and recreate them, but take care not to drop the DBFS_DG griddisks.
    2. Create the disk group from an ASM instance. The cell IP addresses used in the disk paths can be found in cellip.ora:
    [root@dm01db02 ~]# cat /etc/oracle/cell/network-config/cellip.ora
    cell="192.168.64.131"
    cell="192.168.64.132"
    cell="192.168.64.133"
    SQL> create diskgroup DATA_MAC normal redundancy
      2  DISK
      3  'o/192.168.64.131/RECO_DM01_CD_*_dm01cel01'
      4  ,'o/192.168.64.132/RECO_DM01_CD_*_dm01cel02'
      5  ,'o/192.168.64.133/RECO_DM01_CD_*_dm01cel03'
      6  attribute
      7  'AU_SIZE'='4M',
      8  'CELL.SMART_SCAN_CAPABLE'='TRUE',
      9  'compatible.rdbms'='11.2.0.2',
     10  'compatible.asm'='11.2.0.2'
     11  /
    3. Mount the disk group on the other instances: ALTER DISKGROUP DATA_MAC mount;
    4. Afterwards, crsctl start/stop resource ora.DATA_MAC.dg can be used to manage the disk group resource.

    Read the article

  • Why can't WARs share session info?

    - by rvcoutinho
    I have seen several developers looking for a solution to this problem: accessing session information from a different WAR (even when inside the same EAR) - here are some samples: Any way to share session state between different applications in tomcat?, Access session of another web application, different WAR files, shared resources, Tomcat: How to share data between two applications?, What does the crossContext attribute do in Tomcat? Does it enable session sharing? and so on... From all I have found, there are some container-specific solutions, but they are somehow 'contrary to the specification'. I have also looked through the Java EE specification without any luck in finding an answer. Some developers talk about coupling between web applications, but I tend to disagree. What is the reason one would keep WARs inside the same EAR if not coupling? EJBs, for instance, can be accessed locally (even if inside another EJB JAR within the same EAR). More specifically, one of my WARs handles authentication and authorization, and I would like to share this information with other WARs (in the same EAR). I have managed to work around similar problems before by packaging WARs as JARs and putting them into a single WAR project (WEB-INF/lib). Yet I do not like this solution (it requires a huge effort on servlet naming and so on). And no solution has answered the first (and most important) question: Why can't WARs share session information?

    Read the article

  • How to define implementation details?

    - by woni
    In our project, an assembly combines logic for the IoC container, the project internals and the communication layer. The current version evolved to have only internal classes in add-in assemblies. My main problem with this approach is that the entry point is only available through the IoC container. It is not possible to use anything other than reflection to initialize the assembly. Everything behind the IoC interface is defined as an implementation detail and therefore not intended for use from outside. It is well known that you should not test implementation details (such as private and internal methods), because they should be tested through the public interface. It is also well known that your tests should not use the IoC container to set up the SUTs, because that would result in too many dependencies. So we are using the InternalsVisibleTo attribute to make internals visible to our test assemblies and test the so-called implementation details. I recognize that one problem could be the mix-up of different concerns in that assembly; changing this would make this discussion moot, because the classes would have to be defined as public. Ignoring my concerns with this: isn't the need to test a class reason enough to make it public? The use of InternalsVisibleTo seems unintended and a little bit "hacky". The approach of testing only against the publicly available IoC container is too costly and would result in integration-style tests. The pros of using internals are that their usages are well known and they do not have to be treated like a public method would (documentation, completeness, versioning, ...). Is there a solution that avoids testing against internals but keeps their advantages over public classes, or do we have to redefine what an implementation detail is?

    Read the article

  • Calling a webservice via Javascript

    - by jeroenb
    If you want to consume a webservice, it's not always necessary to do a postback. It's not even that hard! 1. Webservice: You have to add the ScriptService attribute to the webservice. [System.Web.Script.Services.ScriptService]public class PersonsInCompany : System.Web.Services.WebService { Create a WebMethod: [WebMethod] public Person GetPersonByFirstName(string name) { List<Person> personSelect = persons.Where(p => p.FirstName.ToLower().StartsWith(name.ToLower())).ToList(); if (personSelect.Count > 0) return personSelect.First(); else return null; } 2. Webpage: Add a reference to your service to your ScriptManager. Then add some JavaScript, where you first call your webservice (ClassName.WebMethod, here PersonsInCompany.GetPersonByFirstName) and add a callback to catch the result from the webservice and use it to update your page: <script type="text/javascript"> function GetPersonInCompany() { var val = document.getElementById("MainContent_TextBoxPersonName"); PersonsInCompany.GetPersonByFirstName(val.value, FinishCallback); } function FinishCallback(result) { document.getElementById("MainContent_LabelFirstName").innerHTML = result.FirstName; document.getElementById("MainContent_LabelName").innerHTML = result.Name; document.getElementById("MainContent_LabelAge").innerHTML = result.Age; document.getElementById("MainContent_LabelCompany").innerHTML = result.Company; } </script> If you have any questions, feel free to contact me! You can download the code here.
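
    The post says to add a reference to the service in the ScriptManager but does not show that markup, so here is only a minimal sketch of what it could look like; the .asmx path is an assumption, not taken from the post:

    <asp:ScriptManager ID="ScriptManager1" runat="server">
        <Services>
            <%-- Path is assumed; point it at the PersonsInCompany web service --%>
            <asp:ServiceReference Path="~/PersonsInCompany.asmx" />
        </Services>
    </asp:ScriptManager>

    With the ScriptService attribute on the service and this reference in place, the ScriptManager emits the PersonsInCompany JavaScript proxy that the snippet above calls.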

    Read the article

  • In MVC, why can't a model create a view?

    - by MUY Belgium
    I have a web application written in Perl with a controller, some "views" and some "Models". Each "Model" corresponds to one "View". The controller (one file) creates a Model object corresponding to the requested view (the view is a CGI argument) and then retrieves the view from the module it has just created. Indeed, this seems like a bad thing, but can you argue a bit more about why? My first idea was that since the "Model" object depends upon the "view", the "model" is actually a view. But the fact that ALL the CGI parameters are passed to the Model also means the "Model" does not truly become a view but rather loses all its value, since it is tied to the current implementation of the web app. In other words, the "Model" remains a model but loses its "comprehensibility" (the "Model" is not easily understandable). I am quite new to project analysis, so please do not be too harsh. Why is this bad? I have made a prototype with the main structures of this web application as I understand them, made as short as possible. #Model.pm package Model; sub import { # this requires an attribute called "view" # and this requires an argument which is the cgi params } ... #View1.pm package View1; ... #Model1.pm package ModelView1; use base 'Model'; use View1; sub new { my $class = shift; my $arg = shift; Model::DoSomething($arg); $self->{view} = new View1($arg); ... } #controller.cgi my $model = 0; ... $model = new Model1( cgi_param => params() ); # there are several models here ... print $model->get_view()->get_html();

    Read the article

  • How do I stop video tearing? (Nvidia prop driver, non-compositing window manager)

    - by Chan-Ho Suh
    I have the problem which seemingly afflicts many users of the proprietary Nvidia driver: video tearing, i.e. fine horizontal lines (usually near the top of my display) when there is a lot of panning or action in the video. (Note: switching back to the default nouveau driver is not an option, as its seemingly nonexistent power management drains my battery several times faster.) I've tried Totem, Parole, and VLC, and tearing occurs with all of them. The best result has been to use X11 output in VLC, but there is still tearing with relatively moderate action. Hardware: MacBook Air 3,2, which has an Nvidia GeForce 320M. There are two common fixes for tearing with the Nvidia proprietary drivers: Turn off compositing, since the Nvidia proprietary drivers don't usually play nice with compositing window managers on Linux (Compiz is an exception I'm aware of). But I use an extremely lightweight window manager (Awesome window manager) which is not even capable of compositing (or any cool effects). I also have this problem in Xfce, where I have compositing disabled. Enabling sync to VBlank. To enable this, I set the option in nvidia-settings and then autostart it as nvidia-settings -l with my other autostart programs. This seems to work, because when I run glxgears, I get: $ glxgears Running synchronized to the vertical refresh. The framerate should be approximately the same as the monitor refresh rate. 303 frames in 5.0 seconds = 60.500 FPS 300 frames in 5.0 seconds = 59.992 FPS And when I check the refresh rate using nvidia-settings: $ nvidia-settings -q RefreshRate Attribute 'RefreshRate' (wampum:0.0; display device: DFP-2): 60.00 Hz. All this suggests sync to VBlank is enabled. As I understand it, this is precisely designed to stop tearing, and a lot of people's problem is even getting something like glxgears to output the correct info. I don't understand why it's not working for me. xorg.conf: http://paste.ubuntu.com/992056/ Example of observed tearing: (screenshot omitted)

    Read the article

  • maxItemsInObjectGraph limit required to be changed for server and client

    - by Michael Freidgeim
    We have a WCF service that is expected to return a huge XML payload. It worked OK in testing, but in production it failed with the error "Maximum number of items that can be serialized or deserialized in an object graph is '65536'. Change the object graph or increase the MaxItemsInObjectGraph quota." The MSDN article about the dataContractSerializer XML configuration element correctly describes the maxItemsInObjectGraph attribute default as 65536, but the documentation for the DataContractSerializer.MaxItemsInObjectGraph property and the DataContractJsonSerializer.MaxItemsInObjectGraph property talks about Int32.MaxValue, which causes confusion, in particular because Google shows the property articles before the configuration articles. When we changed the value in the WCF service configuration it didn't help, because the same change must ALSO be done on the client. There are similar posts: http://stackoverflow.com/questions/6298209/how-to-fix-maxitemsinobjectgraph-error/6298356#6298356 ("You need to set the MaxItemsInObjectGraph on the dataContractSerializer using a behavior on both the client and service; see the following link for an example."), http://devlicio.us/blogs/derik_whittaker/archive/2010/05/04/setting-maxitemsinobjectgraph-for-wcf-there-has-to-be-a-better-way.aspx, http://stackoverflow.com/questions/2325321/maxitemsinobjectgraph-ignored/4455209#4455209 ("I had forgot to place this setting in my client app.config file."), http://stackoverflow.com/questions/9191167/maximum-number-of-items-that-can-be-serialized-or-deserialized-in-an-object-grap, http://stackoverflow.com/questions/5867304/datacontractjsonserializer-and-maxitemsinobjectgraph?rq=1. It seems that DataContractJsonSerializer.MaxItemsInObjectGraph has an actual default of 65536, because there is no configuration element for the JSON serializer, yet it complains about the limit. I believe that MS should clarify the property documentation regarding the default limit and make the error messages more specific, to distinguish server-side and client-side errors. Note that as a workaround it's possible to use the commonBehaviors section, which can be defined only in machine.config: <commonBehaviors> <behaviors> <endpointBehaviors> <dataContractSerializer maxItemsInObjectGraph="..." /> </endpointBehaviors> </behaviors> </commonBehaviors>
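
    As a sketch of what the non-machine.config fix looks like, the same endpointBehaviors entry has to appear in both the service's web.config and the client's app.config and be referenced from the endpoint; the behavior name, endpoint address and contract name below are illustrative assumptions, not taken from the post:

    <system.serviceModel>
      <behaviors>
        <endpointBehaviors>
          <!-- assumed behavior name -->
          <behavior name="largeObjectGraph">
            <dataContractSerializer maxItemsInObjectGraph="2147483647" />
          </behavior>
        </endpointBehaviors>
      </behaviors>
      <client>
        <!-- the client endpoint must opt in to the same behavior -->
        <endpoint address="http://example.com/MyService.svc" binding="basicHttpBinding"
                  contract="IMyService" behaviorConfiguration="largeObjectGraph" />
      </client>
    </system.serviceModel>

    On the service side the matching behavior is referenced the same way, via behaviorConfiguration on the service's endpoint element.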

    Read the article

  • Proper way to encapsulate a Shader into different modules

    - by y7haar
    I am planning to build a shader system which can be accessed through different components/modules in C++. Each component has its own functionality, like transform-related stuff (handling the MVP matrix, ...), a texture handler, light calculation, etc. So here's an example: I would like to display an object which has a texture and a toon shading material applied, and it should be movable. So I could write ONE shading program that handles all 3 functionalities, and they are accessed through 3 different components (texture handler, toon shading, transform). This means I have to take care of feeding a GLSL shader with different uniforms/attributes. This implies knowing all the necessary uniform locations and attribute locations that the GLSL shader owns. It would also be necessary to provide different algorithms to calculate the value for each input variable. Similar functions would be grouped together in one component. A possible way would be to wrap all shaders in their own definition file written in JSON/XML and parse that file in C++ to get all input members, then create and compile the resulting GLSL. But maybe there is another way that is not so complex? So I'm searching for a way to build a system like that, but I'm not sure yet which is the best approach.

    Read the article

  • Update Boolean attributes from another controller

    - by sidonstackoverflow
    I have a Users controller and a Sessions controller. I want to update one user attribute from the Sessions controller. How can I do that? I am currently using Rails 4.0. Users controller: class UsersController < ApplicationController def show if Spec.find_by_user_id params[:id] @user = User.find(params[:id]) @spec = Spec.find_by_user_id params[:id] else if params[:id] == session[:id] redirect_to spec_edit_path(params[:id]) else redirect_to(community_index_path, {:notice => "Sorry there was an error"}) end end end def index end def new @user = User.new end def create @user = User.new(user_params) if @user.save flash[:success] = "Welcome buddy !" redirect_to @user else render 'new' end end private def user_params params.require(:user).permit(:name, :email, :password, :password_confirmation) end end Sessions controller: class SessionsController < ApplicationController def new end def create user = User.find_by(email: params[:session][:email]) if user && user.authenticate(params[:session][:password]) session[:user_id] = user.id User.update(user.status, 'true') redirect_to root_url, :notice => 'You successfully logged in ' else flash.now[:error] = 'Invalid email/password combination' # Not quite right! render 'new' end end def destroy session[:user_id] = nil redirect_to root_url, :notice => 'You successfully logged out ' end end In the above code, when a user logs in I just want to update my boolean status column in the users table from the Sessions controller, but I failed. I am thankful to whoever would like to answer my question!

    Read the article

  • jQuery 1.4.4 - issue with attr('selected', null)

    - by Renso
    Issue: The code below worked under jQuery 1.4.2, but when I upgraded to version 1.4.4 it no longer worked as expected - it did not unselect the list box item; only setting "selected" worked: _handleClick: function(elem) { var self = this; var initElem = this.element; var checked = $(elem).attr('checked'); var myId = elem.attr('id').replace(initElem.attr('id') + '_chk_', ''); initElem.children('option[value=' + myId + ']').attr('selected', function() { if (checked) { return 'selected'; } else { return null; } }); if ($.isFunction(self.options.onItemSelected)) { try { self.options.onItemSelected(elem, initElem.children('option').get()); } catch (ex) { if (self.options.allowDebug) alert('select function failed: ' + ex.Description); } } }, Solution: Under jQuery 1.4.4 you need to explicitly remove the attribute, as in removeAttr('selected'): _handleClick: function(elem) { var self = this; var initElem = this.element; var checked = $(elem).is(':checked'); var myId = elem.attr('id').replace(initElem.attr('id') + '_chk_', ''); if (checked) { initElem.children('option[value=' + myId + ']').attr('selected', 'selected'); } else { initElem.children('option[value=' + myId + ']').removeAttr('selected'); } if ($.isFunction(self.options.onItemSelected)) { try { self.options.onItemSelected(elem, initElem.children('option').get()); } catch (ex) { if (self.options.allowDebug) alert('select function failed: ' + ex.Description); } } },

    Read the article

  • Lost access to the unity interface how to fix? (ubuntu 11.10)

    - by Tal Galili
    OK, this is embarrassing: I installed CompizConfig Settings Manager and tried to tweak it so that the transition time when switching windows (using Alt+Tab) would be shorter. By accident I unchecked something else, and it asked me about a conflict - I pressed the "x" button to close the window and as a result I stopped seeing the Unity interface. That is, I cannot see any buttons on the left side. I went to the terminal (Ctrl+Alt+F1) and ran ccsm. As a result I got the following error: $ ccsm /usr/lib/python2.7/site-packages/gtk-2.0/gtk/__init__.py:57: GtkWarning: could not open display warnings.warn(str(e), _gtk.Warning) Traceback (most recent call last): File "/usr/bin/ccsm", line 93, in <module> import ccm File "/usr/lib/python2.7/site-packages/ccm/__init__.py", line 1, in <module> from ccm.Conflicts import * File "/usr/lib/python2.7/site-packages/ccm/Conflicts.py", line 26, in <module> from ccm.Constants import * File "/usr/lib/python2.7/site-packages/ccm/Constants.py", line 29, in <module> CurrentScreenNum = gtk.gdk.display_get_default().get_default_screen().get_number() AttributeError: 'NoneType' object has no attribute 'get_default_screen' What should I do next? Thanks.

    Read the article

  • How to setup my texture cordinates correctly in GLSL 150 and OpenGL 3.3?

    - by RubyKing
    I'm trying to do texture mapping in GLSL 150 and OpenGL 3.3. Here are my shaders; I've tried my best to get this as correct as possible, hopefully it is :) I'm guessing you want to know what the problem is: my texture shows, but not in its fullest form, just one section of it, not the full texture on the quad. All I can think of is that it's the texture coordinates in the main.cpp, which is linked at the bottom of this post. FRAGMENT SHADER #version 150 in vec2 Texcoord_VSPS; out vec4 color; // Values that stay constant for the whole mesh. uniform sampler2D myTextureSampler; //Main Entry Point void main() { // Output color = color of the texture at the specified UV color = texture2D( myTextureSampler, Texcoord_VSPS ); } VERTEX SHADER #version 150 //Position Container in vec3 position; //Container for TexCoords attribute vec2 Texcoord0; out vec2 Texcoord_VSPS; //out vec2 ex_texcoord; //TO USE A DIFFERENT COORDINATE SYSTEM JUST MULTIPLY THE MATRIX YOU WANT //Main Entry Point void main() { //Translations and w Coordinates stuff gl_Position = vec4(position.xyz, 1.0); Texcoord_VSPS = Texcoord0; }

    Read the article

  • decouple software components via nameconvention

    - by csteinmueller
    I'm currently evaluating alternatives to refactor a driver management component. In my multitier architecture I have the base class DAL.Device (my entity) and the interfaces BL.IDriver (handles the data processing between application and device), BL.IDriverCreator (creates an IDriver from a Device) and BL.IDriverFactory (handles the driver creation requests). Every specialization of Device has a corresponding IDriver implementation and a corresponding IDriverCreator implementation. At the moment the mapping is fixed via a type check within the business layer / DriverFactory. That means every new driver needs a) changing code within the DriverFactory and b) referencing the new IDriver implementation / assembly. From a customer's point of view that means every new driver, used or not, requires a complex revalidation of their hardware environment, because it's a critical process. My first inspiration was to use a Caliburn.Micro-like name convention (see Caliburn.Micro: Xaml Made Easy): BL.RestDriver, BL.RestDriverCreator, DAL.RestDevice. After receiving the RestDevice within the IDriverFactory I can load all driver DLLs via reflection and do name splitting/comparison (extracting the xx from xxDriverCreator and xxDevice). Another idea would be a custom attribute (which also leads to comparing strings). My question: is that a good approach across layer borders? If not, what would be a good approach?

    Read the article

  • SQL RDBMS : one query or multiple calls

    - by None None
    After looking around the internet, I decided to create DAOs that return objects (POJOs) to the calling business logic function/method. For example: a Customer object with an Address reference would be split in the RDBMS into two tables, CUSTOMER and ADDRESS. The CustomerDAO would be in charge of joining the data from the two tables, creating both an Address POJO and a Customer POJO, adding the address to the customer object, and finally returning the full Customer POJO. Simple. However, now I am at a point where I need to join three or four tables, each representing an attribute or a list of attributes of the resulting POJO. The SQL will include a GROUP BY, but I will still end up with multiple rows for the same POJO, because some of the tables are joined over one-to-many relationships. My app code will now have to loop through all the rows, trying to figure out whether a row belongs to the same POJO with different attribute values or whether it should start a new POJO. Should I continue to create my DAOs using this technique, or break up my POJO creation into multiple DB calls to make the code easier to understand and maintain?
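
    For the single-query route, the row collapsing described above is usually done by keying on the parent id while iterating the joined result set. The following is only a rough sketch of that idea; the Customer/Address classes and the column names are assumptions for illustration, not taken from the question:

    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical mapper for a CUSTOMER LEFT JOIN ADDRESS query.
    // One Customer POJO is created per customer_id; child rows are attached to it.
    public final class CustomerRowMapper {

        public List<Customer> map(ResultSet rs) throws SQLException {
            Map<Long, Customer> byId = new LinkedHashMap<>();
            while (rs.next()) {
                long id = rs.getLong("customer_id");
                Customer customer = byId.get(id);
                if (customer == null) {                 // first row seen for this customer
                    customer = new Customer(id, rs.getString("customer_name"));
                    byId.put(id, customer);
                }
                long addressId = rs.getLong("address_id");
                if (!rs.wasNull()) {                    // LEFT JOIN can yield customers without addresses
                    customer.addAddress(new Address(addressId, rs.getString("street")));
                }
            }
            return new ArrayList<>(byId.values());
        }
    }

    Note that joining two independent one-to-many tables in the same statement multiplies the row count per parent, so the child rows then also need to be de-duplicated by their own ids; that is often the point at which issuing separate queries per child table becomes the easier option, which is exactly the trade-off the question is weighing.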

    Read the article

  • The [2] table entry '[3]' has no associated entry in the Media table. (error 2602)

    - by derekf
    A coworker started getting the above message in the event log and as a dialog during installation. Argument [2] was File and argument [3] was a specific file. The error dialog read: Product: (app name) -- The installer has encountered an unexpected error installing this package. This may indicate a problem with this package. The error code is 2602. The package was a vendor-provided MSI that had been installed administratively, with a patch (.msp) then applied to the administrative install point. With some digging we found that the MSI still had entries in the Media table pointing at the CAB files, and that there were several files at the end of the sequence that did not have corresponding entries in the Media table (last sequence 990 in the Media table, while the last entry in the File table had sequence 994). The attributes on the files in the File table all had the msidbFileAttributesCompressed (16384) attribute set, so they were all expected to be within the CAB files; but since this was an admin install there were no CAB files. Resolved by clearing the Media table (replacing it with a single entry: Disk ID 1, LastSequence 994) and going through the File table and subtracting 8192 from each entry to mark the files as not compressed. Tested and worked.

    Read the article

  • How to swap or move 2 string in Array? [on hold]

    - by Wisnu Khazefa
    I need to convert a .csv file to a .dat file. In my problem there are value pairs, with a name attribute (called Fund) and a corresponding numeric value. If the input file has a pair whose value is 0, then that pair (Fund and value) is dropped. The output file should contain only those pairs (Fund and value) where the value is non-zero. Here is the prototype of my code. public static void Check_Fund(){ String header = "Text1,Text2,Text3,FUND_UALFND_1,FUND_UALPRC_1,FUND_UALFND_2," +"FUND_UALPRC_2,FUND_UALFND_3,FUND_UALPRC_3,FUND_UALFND_4,FUND_UALPRC_4,FUND_UALFND_5,FUND_UALPRC_5,Text4,Text5,Text6,Text7"; String text = "ABC;CDE;EFG;PRMF;0;PRFF;50;PREF;0;PRCF;0;PRMP;50;TAHU;;BAKWAN;SINGKONG"; String[] head; String[] value; String showText = ""; head = header.split(","); value = text.split(";"); String regex = "\\d+"; String[] fund = {"PREF","PRMF","PRFF","PRCF","PRMP","PDFF","PSEF","PSCB","PSMF","PRGC","PREP"}; for(int i = 0; i < value.length; i++){ for(int j=0;j < fund.length; j++){ if(value[i].equals(fund[j]) && value[i+1].matches(regex)){ if(value[i+1].equals("0")){ value[i] = ""; value[i+1] = ""; } } } showText = showText + head[i] +":" + value[i] + System.lineSeparator(); } System.out.println(showText ); } Expected result - Input: FUND_UALFND_1:PRMF FUND_UALPRC_1:0 FUND_UALFND_2:PRFF FUND_UALPRC_2:50 FUND_UALFND_3:PREF FUND_UALPRC_3:0 FUND_UALFND_4:PRCF FUND_UALPRC_4:0 FUND_UALFND_5:PRMP FUND_UALPRC_5:50 Output: FUND_UALFND_1:PRFF FUND_UALPRC_1:50 FUND_UALFND_2:PRMP FUND_UALPRC_2:50 FUND_UALFND_0: FUND_UALPRC_0: FUND_UALFND_0: FUND_UALPRC_0: FUND_UALFND_0: FUND_UALPRC_0:
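
    The loop above only blanks the zero pairs; it never shifts the remaining pairs to the front. A rough sketch of the intended compaction, as far as it can be read from the expected output, could look like the following; the pair positions mirror the sample record, and the class and method names are just illustrative:

    import java.util.ArrayList;
    import java.util.List;

    public class FundPairCompactor {

        // Keep only fund/value pairs whose value is non-zero, then pad the record
        // back to five pairs so it retains a fixed width. The pairs occupy indices
        // 3..12 in the sample record (Text1..Text3 come first, Text4..Text7 last).
        static List<String> compactFundPairs(String[] value) {
            List<String> kept = new ArrayList<>();
            for (int i = 3; i + 1 <= 12; i += 2) {
                String fund = value[i];
                String price = value[i + 1];
                if (price.matches("\\d+") && !price.equals("0")) {
                    kept.add(fund);
                    kept.add(price);
                }
            }
            while (kept.size() < 10) {      // pad dropped pairs with empty fields
                kept.add("");
            }
            return kept;
        }
    }

    The surviving pairs can then be written out with renumbered headers (FUND_UALFND_1, FUND_UALPRC_1, ...), while the padded empty pairs get the placeholder headers shown in the expected output.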

    Read the article

  • OAuth + Twitter on Android: Callback fails

    - by Samuh
    My Android application uses Java OAuth library, found here for authorization on Twitter. I am able to get a request token, authorize the token and get an acknowlegement but when the browser tries the call back url to reconnect with my application, it does not use the URL I provide in code, but uses the one I supplied while registering with Twitter. Note: 1. When registering my application with twitter, I provided a hypothetical call back url:http://abz.xyc.com and set the application type as browser. 2. I provided a callback url in my code "myapp" and have added an intent filter for my activity with Browsable category and data scheme as "myapp". 3. URL called when authorizing does contain te callback url, I specified in code. Any idea what I am doing wrong here? Relevant Code: public class FirstActivity extends Activity { /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); OAuthAccessor client = defaultClient(); Intent i = new Intent(Intent.ACTION_VIEW); i.setData(Uri.parse(client.consumer.serviceProvider.userAuthorizationURL + "?oauth_token=" + client.requestToken + "&oauth_callback=" + client.consumer.callbackURL)); startActivity(i); } OAuthServiceProvider defaultProvider() { return new OAuthServiceProvider(GeneralRuntimeConstants.request_token_URL, GeneralRuntimeConstants.authorize_url, GeneralRuntimeConstants.access_token_url); } OAuthAccessor defaultClient() { String callbackUrl = "myapp:///"; OAuthServiceProvider provider = defaultProvider(); OAuthConsumer consumer = new OAuthConsumer(callbackUrl, GeneralRuntimeConstants.consumer_key, GeneralRuntimeConstants.consumer_secret, provider); OAuthAccessor accessor = new OAuthAccessor(consumer); OAuthClient client = new OAuthClient(new HttpClient4()); try { client.getRequestToken(accessor); } catch (Exception e) { e.printStackTrace(); } return accessor; } @Override protected void onResume() { // TODO Auto-generated method stub super.onResume(); Uri uri = this.getIntent().getData(); if (uri != null) { String access_token = uri.getQueryParameter("oauth_token"); } } } // Manifest file <application android:icon="@drawable/icon" android:label="@string/app_name"> <activity android:name=".FirstActivity" android:label="@string/app_name"> <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> <intent-filter> <action android:name="android.intent.action.VIEW" /> <category android:name="android.intent.category.DEFAULT" /> <category android:name="android.intent.category.BROWSABLE" /> <data android:scheme="myapp"/> </intent-filter> </activity> </application>

    Read the article

  • why OAuth request_token using openid4java is missing in the google's response?

    - by user454322
    I have succeed using openID and OAuth separately, but I can't make them work together. Am I doing something incorrect: String userSuppliedString = "https://www.google.com/accounts/o8/id"; ConsumerManager manager = new ConsumerManager(); String returnToUrl = "http://example.com:8080/isr-calendar-test-1.0-SNAPSHOT/GAuthorize"; List<DiscoveryInformation> discoveries = manager.discover(userSuppliedString); DiscoveryInformation discovered = manager.associate(discoveries); AuthRequest authReq = manager.authenticate(discovered, returnToUrl); session.put("openID-discoveries", discovered); FetchRequest fetch = FetchRequest.createFetchRequest(); fetch.addAttribute("email","http://schema.openid.net/contact/email",true); fetch.addAttribute("oauth", "http://specs.openid.net/extensions/oauth/1.0",true); fetch.addAttribute("consumer","example.com" ,true); fetch.addAttribute("scope","http://www.google.com/calendar/feeds/" ,true); authReq.addExtension(fetch); destinationUrl = authReq.getDestinationUrl(true); then destinationUrl is https://www.google.com/accounts/o8/ud?openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.claimed_id=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select&openid.identity=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select&openid.return_to=http%3A%2F%2Fexample.com%3A8080%2FgoogleTest%2Fauthorize&openid.realm=http%3A%2F%2Fexample.com%3A8080%2FgoogleTest%2Fauthorize&openid.assoc_handle=AMlYA9WVkS_oVNWtczp3zr3sS8lxR4DlnDS0fe-zMIhmepQsByLqvGnc8qeJwypiRQAuQvdw&openid.mode=checkid_setup&openid.ns.ext1=http%3A%2F%2Fopenid.net%2Fsrv%2Fax%2F1.0&openid.ext1.mode=fetch_request&openid.ext1.type.email=http%3A%2F%2Fschema.openid.net%2Fcontact%2Femail&openid.ext1.type.oauth=http%3A%2F%2Fspecs.openid.net%2Fextensions%2Foauth%2F1.0&openid.ext1.type.consumer=example.com&openid.ext1.type.scope=http%3A%2F%2Fwww.google.com%2Fcalendar%2Ffeeds%2F&openid.ext1.required=email%2Coauth%2Cconsumer%2Cscope" but in the response from google request_token is missing http://example.com:8080/googleTest/authorize?openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.mode=id_res&openid.op_endpoint=https%3A%2F%2Fwww.google.com%2Faccounts%2Fo8%2Fud&openid.response_nonce=2011-11-29T17%3A38%3A39ZEU2iBVXr_zQG5Q&openid.return_to=http%3A%2F%2Fexample.com%3A8080%2FgoogleTest%2Fauthorize&openid.assoc_handle=AMlYA9WVkS_oVNWtczp3zr3sS8lxR4DlnDS0fe-zMIhmepQsByLqvGnc8qeJwypiRQAuQvdw&openid.signed=op_endpoint%2Cclaimed_id%2Cidentity%2Creturn_to%2Cresponse_nonce%2Cassoc_handle%2Cns.ext1%2Cext1.mode%2Cext1.type.email%2Cext1.value.email&openid.sig=5jUnS1jT16hIDCAjv%2BwAL1jopo6YHgfZ3nUUgFpeXlw%3D&openid.identity=https%3A%2F%2Fwww.google.com%2Faccounts%2Fo8%2Fid%3Fid%3DAItOawk8YPjBcnQrqXW8tzK3aFVop63E7q-JrCE&openid.claimed_id=https%3A%2F%2Fwww.google.com%2Faccounts%2Fo8%2Fid%3Fid%3DAItOawk8YPjBcnQrqXW8tzK3aFVop63E7q-JrCE&openid.ns.ext1=http%3A%2F%2Fopenid.net%2Fsrv%2Fax%2F1.0&openid.ext1.mode=fetch_response&openid.ext1.type.email=http%3A%2F%2Fschema.openid.net%2Fcontact%2Femail&openid.ext1.value.email=boxiencosi%40gmail.com why?

    Read the article

  • Asp.net MVC and MOSS 2010 integration

    - by Robert Koritnik
    Just a sidenote: I'm not sure whether I should post this to serverfault as well, because some MOSS admin may have some info for me as well? A bit of explanation first (without Asp.net MVC) Is it possible to integrate the two? Is it possible to write an application that would share at least credential information with MOSS? I have to write a MOSS application that has to do with these technologies: MOSS 2010 Personal client certificates authentication (most probably on USB keys) Active Directory Federation Services Separate SQL DB that would serve application specific data (separate as not being part of MOSS DB) How should it work? Users should authenticate using personal certificates into MOSS 2010 There would be a certain part of MOSS that would be related to my custom application This application should only authorize certain users via AD FS - I guess these users should have a certain security claim attached to them This application should manage users (that have access to this app) with additional (app specific) security claims related to this application (as additional application level authorization rights for individual application parts) This application should use custom SQL 2008 DB heavily with its own data This application should have the possibility to integrate with external systems as well (Exchange for instance to inject calendar entries, ERP systems etc) This application should be able to export its data (from its DB) to files. I don't know if it's possible, but it would be nice if the app could add these files to MOSS and attach authorization info to them so only users with sufficient rights would be able to view/open these files. Why Asp.net MVC then? I'm very well versed in Asp.net MVC (also with the latest version) and I haven't done anything on Sharepoint since version 2003 (which doesn't do me no good or prepare me for the latest version in any way shape or form). This project will most probably be a death march project so I would rather write my application as a UI rich Asp.net MVC application and somehow integrate it into MOSS. But not only via a link, because I would like to at least share credentials, so users wouldn't need to re-login when accessing my app. Using Asp.net MVC I would at least have the possibility to finish on time or be less death marching. Is this at all possible? Questions Is it possible to integrate Asp.net MVC into MOSS as described above? If integration is not possible, would it be possible to create a completely MOSS based application that would work as described? Which parts of MOSS 2010 should I use to accomplish what I need?

    Read the article

  • Web P2P video confrence solution

    - by dtroy
    I'm looking for the best possible solution which will allow me to incorporate live video/audio conferencing between 2 users (only 2 at this point) into a Flash gaming platform. The video chat is not just an extra feature, it's the main one. I'm mainly looking at open source implementations or something I'll be able to implement myself, but I will consider commercial products if they are exactly what I need. Here are a few things I've looked at, but so far I didn't find any of them good enough: Flash Player 10's P2P capabilities sound promising, but I am aware of the fact that Adobe has not released any information on the RTMFP protocol and that there is no commercial server which supports it at this point. Streaming all the video/audio live through a Flash server (not P2P); but from my personal experience you don't get a smooth conversation. I think TokBox uses this method. Java applets are a possible solution too (to perform the P2P), but I don't think it would be a nice and elegant solution to combine them with the game at this point (and they require the user to authorize them). BTW, I couldn't find any useful implementations, so if you know of any, I'll look into them. Google Gmail Video Chat uses a custom (and proprietary) browser plug-in which does the P2P and streams the video/audio into the Flash player. This is a possible solution, but I'd rather not implement the entire P2P protocol stack + browser plug-in at this stage and would rather concentrate on other aspects of the game itself. I think they are using an XMPP-based protocol similar to Jingle, and they've released a Jingle library, but without the video conferencing implementation. EDIT: In response to Branden: I am aware of Adobe Stratus. Stratus is a beta, hosted rendezvous service that aids establishing communications between Flash Player endpoints (an RTMFP server). The current release of Stratus is a prerelease and is designed for evaluation purposes only. The service is not final. There is no guarantee that the service will continue to exist in the future, nor any information about future cost. That's why I don't think it can be used as a commercial solution, at least not yet. I'd appreciate your suggestions and advice. Thanks!

    Read the article

  • Using ViewModel Pattern with MVC 2 Strongly Typed HTML Helpers

    - by Brettski
    I am working with ASP.NET MVC 2 RC and can't figure out how to get the HTML helper TextBoxFor to work with a ViewModel pattern. When used on an edit page, the data is not saved when UpdateModel() is called in the controller. I have taken the following code examples from the NerdDinner application. Edit.aspx <%@ Language="C#" Inherits="System.Web.Mvc.ViewUserControl<NerdDinner.Models.DinnerFormViewModel>" %> ... <p> // This works when saving in the controller (MVC 1) <label for="Title">Dinner Title:</label> <%= Html.TextBox("Title", Model.Dinner.Title) %> <%= Html.ValidationMessage("Title", "*") %> </p> <p> // This does not work when saving in the controller (MVC 2) <label for="Title">Dinner Title:</label> <%= Html.TextBoxFor(model => model.Dinner.Title) %> <%= Html.ValidationMessageFor(model=> model.Dinner.Title) %> </p> DinnerController // POST: /Dinners/Edit/5 [HttpPost, Authorize] public ActionResult Edit(int id, FormCollection collection) { Dinner dinner = dinnerRepository.GetDinner(id); if (!dinner.IsHostedBy(User.Identity.Name)) return View("InvalidOwner"); try { UpdateModel(dinner); dinnerRepository.Save(); return RedirectToAction("Details", new { id=dinner.DinnerID }); } catch { ModelState.AddModelErrors(dinner.GetRuleViolations()); return View(new DinnerFormViewModel(dinner)); } } When the original helper style is used (Html.TextBox), the UpdateModel(dinner) call works as expected and the new values are saved. When the new (MVC 2) helper style is used (Html.TextBoxFor), the UpdateModel(dinner) call does not update the values. Yes, the current values are loaded into the edit page on load. Is there something else I need to add to the controller code for it to work? The new helper works fine if I am just using a model and not a ViewModel pattern. Thank you.

    Read the article
