Search Results

Search found 9563 results on 383 pages for 'insertion sort'.

  • Are the formatted addresses of a Google location unique?

    - by Hans
    I want the users of our web site to be able to either search and pick an address or mark a location on a map, and to decide how accurate this address/location is. I am in the process of implementing the first part with jQuery, jQuery UI's autocomplete, Google Maps, and the Google geocoder. For the second part I will generate a radio-button list based on the address elements/alternatives of the first part on the client side with jQuery. My concern, however, is how to convey the choices to the server side. The Google geocoder includes a number of useful metadata fields that I want to store. A possibility is to store the complete JSON object in a hidden form field, but I can't trust the users. Such a solution would allow hostile insertion of spam into the data. If the addresses/locations had a unique identifier I could store just that and let the server refetch/evaluate the data. The alternative geonames.org web service has such ids. But are, for example, the formatted addresses of a Google location unique? Any tips?

    Read the article

  • Doubly Linked Lists Implementation

    - by user552127
    Hi all, I have looked at most threads here about doubly linked lists but am still unclear about the following. I am working through the Goodrich and Tamassia book in Java. Please correct me if I am wrong: a doubly linked list is different from a singly linked list in that a node can be inserted anywhere, not just after the head or the tail, using the next and prev references, whereas in a singly linked list insertion at an arbitrary position is not possible? If one wants to insert a node into a doubly linked list, then the argument should be either the node after or the node before the position being inserted at? If so, I don't understand how to pass that node. Should we display all nodes inserted so far and ask the user to select the node before or after which the new node is to be inserted? My doubt is how to pass this reference node, because I assume that will require its next and prev nodes as well. For example, given Head<-A<-B<-C<-D<-E<-tail, if Z is a new node to be inserted after D, how should node D be passed? I am confused by this even though it seems pretty simple to most. Please do explain. Thanks, Sanjay
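    The reference to D would normally come from whatever operation located D in the first place (a search, an iterator position, or the node returned when D itself was inserted); the caller never supplies prev and next separately, because the node already carries both links. A minimal sketch of that idea in Java (my own illustrative classes, not the book's):

        class DNode<E> {
            E element;
            DNode<E> prev, next;
            DNode(E element) { this.element = element; }
        }

        class DoublyLinkedList<E> {
            private final DNode<E> header = new DNode<>(null);   // sentinel before the first real node
            private final DNode<E> trailer = new DNode<>(null);  // sentinel after the last real node

            DoublyLinkedList() {
                header.next = trailer;
                trailer.prev = header;
            }

            /** Insert x immediately after the given node and return the newly created node. */
            DNode<E> insertAfter(DNode<E> node, E x) {
                DNode<E> fresh = new DNode<>(x);
                fresh.prev = node;
                fresh.next = node.next;
                node.next.prev = fresh;
                node.next = fresh;
                return fresh;
            }

            /** Appending at the tail is just "insert after the last real node". */
            DNode<E> addLast(E x) {
                return insertAfter(trailer.prev, x);
            }
        }

    Calling insertAfter(d, z) splices Z in after D in O(1). Note that inserting after a given node would also work in a singly linked list; it is operations that need the previous node (such as removing a given node or inserting before it) that really require the prev link.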

    Read the article

  • Sql Server 2005 multiple insert with c#

    - by bottlenecked
    Hello. I have a class named Entry declared like this: class Entry{ string Id {get;set;} string Name {get;set;} } and then a method that will accept multiple such Entry objects for insertion into the database using ADO.NET: static void InsertEntries(IEnumerable<Entry> entries){ //build a SqlCommand object using(SqlCommand cmd = new SqlCommand()){ ... const string refcmdText = "INSERT INTO Entries (id, name) VALUES (@id{0},@name{0});"; int count = 0; string query = string.Empty; //build a large query foreach(var entry in entries){ query += string.Format(refcmdText, count); cmd.Parameters.AddWithValue(string.Format("@id{0}",count), entry.Id); cmd.Parameters.AddWithValue(string.Format("@name{0}",count), entry.Name); count++; } cmd.CommandText=query; //and then execute the command ... } } And my question is this: should I keep using the above way of sending multiple insert statements (build a giant string of insert statements and their parameters and send it over the network), or should I keep an open connection and send a single insert statement for each Entry like this: using(SqlCommand cmd = new SqlCommand(){ using(SqlConnection conn = new SqlConnection(){ //assign connection string and open connection ... cmd.Connection = conn; foreach(var entry in entries){ cmd.CommandText= "INSERT INTO Entries (id, name) VALUES (@id,@name);"; cmd.Parameters.AddWithValue("@id", entry.Id); cmd.Parameters.AddWithValue("@name", entry.Name); cmd.ExecuteNonQuery(); } } } What do you think? Will there be a performance difference in the Sql Server between the two? Are there any other consequences I should be aware of? Thank you for your time!

    Read the article

  • How to write a "thread safe" function in C ?

    - by Andrei Ciobanu
    Hello, I am writing some data structures in C, and I've realized that their associated functions aren't thread safe. The code I am writing uses only standard C, and I want to achieve some sort of 'synchronization'. I was thinking of doing something like this: enum sync_e { TRUE, FALSE }; typedef enum sync_e sync; struct list_s { //Other stuff struct list_node_s *head; struct list_node_s *tail; enum sync_e locked; }; typedef struct list_s list; The idea is to include a "boolean" field in the list structure that indicates the structure's state: locked or unlocked. For example, an insertion function would be rewritten this way: int list_insert_next(list* l, list_node *e, int x){ while(l->locked == TRUE){ /* Wait */ } l->locked = TRUE; /* Insert element */ /* -------------- */ l->locked = FALSE; return (0); } While operating on the list the 'locked' field is set to TRUE, not allowing any other alterations. After the operation completes the 'locked' field is set back to FALSE. Is this approach good? Do you know other approaches (using only standard C)?

    Read the article

  • Application Code Redesign to reduce no. of Database Hits from Performance Perspective

    - by Rachel
    Scenario: I want to parse a large CSV file and insert its data into the database; the CSV file has approximately 100K rows of data. Currently I am using fgetcsv to parse the file row by row and insert data into the database, so right now I am hitting the database for each line of data present in the CSV file. The database hit count is therefore 100K, which is not good from a performance point of view. Current code: public function initiateInserts() { //Open Large CSV File(min 100K rows) for parsing. $this->fin = fopen($file,'r') or die('Cannot open file'); //Parsing Large CSV file to get data and initiate insertion into schema. while (($data=fgetcsv($this->fin,5000,";"))!==FALSE) { $query = "INSERT INTO dt_table (id, code, connectid, connectcode) VALUES (:id, :code, :connectid, :connectcode)"; $stmt = $this->prepare($query); // Then, for each line : bind the parameters $stmt->bindValue(':id', $data[0], PDO::PARAM_INT); $stmt->bindValue(':code', $data[1], PDO::PARAM_INT); $stmt->bindValue(':connectid', $data[2], PDO::PARAM_INT); $stmt->bindValue(':connectcode', $data[3], PDO::PARAM_INT); // Execute the statement $stmt->execute(); $this->checkForErrors($stmt); } } I am looking for a way wherein, instead of hitting the database for every row of data, I can prepare the query and then hit it once and populate the database with the inserts. Any suggestions? Note: This is the exact sample code that I am using, but the CSV file has more fields than just id, code, connectid and connectcode; I wanted to make sure that I am able to explain the logic, and so have used this sample code here. Thanks!
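    The question is PHP/PDO, but the pattern being asked about - prepare the statement once outside the loop, queue many parameter sets, and send them in chunks inside a single transaction - is the same in most database APIs. A hedged sketch of that pattern in JDBC (table and column names taken from the question; the chunk size is an arbitrary illustration):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;
        import java.util.List;

        public class BulkCsvInsert {
            // Sketch only: assumes the CSV rows are already parsed into String[] values.
            public static void insertAll(Connection conn, List<String[]> rows) throws SQLException {
                String sql = "INSERT INTO dt_table (id, code, connectid, connectcode) VALUES (?, ?, ?, ?)";
                conn.setAutoCommit(false);                    // one transaction instead of one per row
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    int pending = 0;
                    for (String[] row : rows) {
                        ps.setInt(1, Integer.parseInt(row[0]));
                        ps.setInt(2, Integer.parseInt(row[1]));
                        ps.setInt(3, Integer.parseInt(row[2]));
                        ps.setInt(4, Integer.parseInt(row[3]));
                        ps.addBatch();
                        if (++pending % 1000 == 0) {          // flush in chunks to bound memory use
                            ps.executeBatch();
                        }
                    }
                    ps.executeBatch();                        // flush the remainder
                    conn.commit();
                } catch (SQLException e) {
                    conn.rollback();
                    throw e;
                }
            }
        }

    In PDO the same shape is beginTransaction(), a single prepare() before the loop, execute() per row (or a multi-row VALUES list), then commit(); most of the win comes from not re-preparing the statement on every iteration and not committing 100K separate transactions.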

    Read the article

  • Is it necessary to mysql real escape when using alter table?

    - by cgwebprojects
    I noticed the other day that I cannot bind variables when using PDO with ALTER TABLE; for example, the following will not work: $q = $dbc -> prepare("ALTER TABLE emblems ADD ? TINYINT(1) UNSIGNED NOT NULL DEFAULT '0', ADD ? DATETIME NOT NULL"); $q -> execute(array($emblemDB, $emblemDB . 'Date')); So is it necessary to use mysql_real_escape_string and do it like below? // ESCAPE NAME FOR MYSQL INSERTION $emblemDB = mysql_real_escape_string($emblemDB); // INSERT EMBLEM DETAILS INTO DATABASE $q = $dbc -> prepare("ALTER TABLE emblems ADD " . $emblemDB . " TINYINT(1) UNSIGNED NOT NULL DEFAULT '0', ADD " . $emblemDB . "Date DATETIME NOT NULL"); $q -> execute(); Or do I not need to add in mysql_real_escape_string, since the only thing the query can do is ADD columns? Thanks
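    Placeholders in a prepared statement can only stand in for values, never for identifiers such as column names, which is why the first snippet cannot work; and escaping functions like mysql_real_escape_string are aimed at string literals, not identifiers. The usual defence is to validate the name against a strict whitelist pattern before concatenating it into the DDL. A language-neutral sketch of that check (written in Java here; the class name and length limit are illustrative):

        import java.sql.Connection;
        import java.sql.SQLException;
        import java.sql.Statement;
        import java.util.regex.Pattern;

        public class AddEmblemColumns {
            // Accept only plain identifiers: letters, digits and underscores, not starting with a digit.
            private static final Pattern SAFE_IDENTIFIER = Pattern.compile("^[A-Za-z_][A-Za-z0-9_]{0,63}$");

            public static void addColumns(Connection conn, String emblem) throws SQLException {
                if (!SAFE_IDENTIFIER.matcher(emblem).matches()) {
                    throw new IllegalArgumentException("Unsafe column name: " + emblem);
                }
                String sql = "ALTER TABLE emblems "
                           + "ADD " + emblem + " TINYINT(1) UNSIGNED NOT NULL DEFAULT '0', "
                           + "ADD " + emblem + "Date DATETIME NOT NULL";
                try (Statement st = conn.createStatement()) {
                    st.executeUpdate(sql);              // identifier was whitelisted above, so plain DDL is used here
                }
            }
        }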

    Read the article

  • How to convert a list object to bigdecimal in prepared statement?

    - by user1103504
    I am using a prepared statement for bulk insertion of records. I am iterating over a list that contains values whose data types differ. One of the data types is BigDecimal, and when I try to set it on the prepared statement, it throws a NullPointerException. My code: int count = 1; for (int j = 0; j < list.size(); j++) { if(list.get(j) instanceof Timestamp) { ps.setTimestamp(count, (Timestamp) list.get(j)); } else if(list.get(j) instanceof java.lang.Character) { ps.setString(count, String.valueOf(list.get(j))); } else if(list.get(j) instanceof java.math.BigDecimal) { ps.setBigDecimal(count, (java.math.BigDecimal)list.get(j)); } else { ps.setObject(count, list.get(j)); } count++; } I tried two ways to convert: casting the object, and creating a new object of type BigDecimal with ps.setBigDecimal(count, new BigDecimal(list.get(j).toString())); Neither solves my problem; it still throws a NullPointerException. Help is appreciated. Thanks
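    Without the stack trace this is a guess, but one thing worth ruling out is a null element in the list: instanceof is always false for null, so a null value falls through to setObject, and in the second attempt list.get(j).toString() would itself throw a NullPointerException when the element is null. A defensive sketch of the same loop that handles null explicitly (the Types constant is an assumption; ideally pass the real column type):

        import java.math.BigDecimal;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;
        import java.sql.Timestamp;
        import java.sql.Types;
        import java.util.List;

        public class RowBinder {
            /** Bind a heterogeneous list of values, treating null explicitly instead of letting it fall through. */
            public static void bind(PreparedStatement ps, List<Object> values) throws SQLException {
                int index = 1;
                for (Object value : values) {
                    if (value == null) {
                        ps.setNull(index, Types.NULL);      // many drivers prefer the actual column type here, e.g. Types.DECIMAL
                    } else if (value instanceof Timestamp) {
                        ps.setTimestamp(index, (Timestamp) value);
                    } else if (value instanceof Character) {
                        ps.setString(index, String.valueOf(value));
                    } else if (value instanceof BigDecimal) {
                        ps.setBigDecimal(index, (BigDecimal) value);
                    } else {
                        ps.setObject(index, value);
                    }
                    index++;
                }
            }
        }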

    Read the article

  • Entity Framework Validation & usage

    - by kmsellers
    I'm aware there is an AssociationChanged event; however, this event fires after the association is made. There is no AssociationChanging event. So, if I want to throw an exception for some validation reason, how do I do this and get back to my original value? Also, I would like to default values for my entity based on information from other entities, but do this only when I know the entity is instanced for insertion into the database. How do I tell the difference between that and the object getting instanced because it is about to be populated from existing data? Am I supposed to know? Is that considered business logic that should be outside of my entity business logic? If that's the case, then should I be designing controller classes to wrap all these entities? My concern is that if I deliver back an entity, I want the client to get access to the properties, but I want to retain tight control over validations on how they are set, defaulted, etc. Every example I've seen references the context, which is outside of my entity partial class validation, right? BTW, I looked at the EFPocoAdapter and for the life of me cannot determine how to populate lists of from within my POCO class... anyone know how I get to the context from an EFPoco class?

    Read the article

  • When should we use private variables and when should we use properties. Do Backing Fields should be

    - by Shantanu Gupta
    In most cases we create a private variable and its corresponding public property and use them to implement our functionality. Everyone has a different approach: some people use properties everywhere, and some use the private variables within the same class (since they are private) and expose them to the outside world through properties. Suppose I take a scenario, say insertion into a database. I create some parameters that need to be initialized, so I create 10 private variables and their corresponding public properties, which look like private string name; public string Name { get{return name;} set{name=value;} } and so on. In the cases mentioned above, what should be used: the internal variables or the properties? And in cases like public string Name { get{return name;} set{name=value>5?5:0;} //or any action can be done. this is just an eg. } what should be done?

    Read the article

  • Trying to get focus onto JTextPane after doubleclicking on JList element (Java)

    - by Alex Cheng
    Hi all. Problem: I have the following JList which I add to the textPane, and show it upon the caret moving. However, after double clicking on the Jlist element, the text gets inserted, but the caret is not appearing on the JTextPane. This is the following code: listForSuggestion = new JList(str.toArray()); listForSuggestion.setSelectionMode(ListSelectionModel.SINGLE_SELECTION); listForSuggestion.setSelectedIndex(0); listForSuggestion.setVisibleRowCount(visibleRowCount); listScrollPane = new JScrollPane(listForSuggestion); MouseListener mouseListener = new MouseAdapter() { @Override public void mouseClicked(MouseEvent mouseEvent) { JList theList = (JList) mouseEvent.getSource(); if (mouseEvent.getClickCount() == 2) { int index = theList.locationToIndex(mouseEvent.getPoint()); if (index >= 0) { Object o = theList.getModel().getElementAt(index); //System.out.println("Double-clicked on: " + o.toString()); //Set the double clicked text to appear on textPane String completion = o.toString(); int num= textPane.getCaretPosition(); textPane.select(num, num); textPane.replaceSelection(completion); textPane.setCaretPosition(num + completion.length()); int pos = textPane.getSelectionEnd(); textPane.select(pos, pos); textPane.replaceSelection(""); textPane.setCaretPosition(pos); textPane.moveCaretPosition(pos); } } theList.clearSelection(); Any idea on how to "de-focus" the selection on the Jlist, or make the caret appear on the JTextPane after the text insertion? I'll elaborate more if this is not clear enough. Please help, thanks!
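    Assuming the goal is simply to move keyboard focus back to the pane once the suggestion has been inserted (a guess at intent, not a guaranteed fix for every look-and-feel), Swing has a direct call for this; requesting it on the event dispatch thread after the insertion is the usual pattern. A small sketch, with the helper name and parameters being my own:

        import javax.swing.JList;
        import javax.swing.JTextPane;
        import javax.swing.SwingUtilities;

        final class SuggestionFocusHelper {
            /** Call after the chosen suggestion has been inserted into the pane. */
            static void returnFocusToPane(final JTextPane textPane, final JList suggestionList) {
                suggestionList.clearSelection();
                SwingUtilities.invokeLater(new Runnable() {
                    @Override public void run() {
                        textPane.requestFocusInWindow();      // move keyboard focus off the suggestion list
                        textPane.getCaret().setVisible(true); // make sure the caret is painted again
                    }
                });
            }
        }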

    Read the article

  • C# Application process hangs after some time

    - by Chris
    Hi, I implemented a simple C# application which inserts about 350000 records into the database. This used to work well, and the process took approximately 20 minutes. I created a progress bar which shows approximately how far the record insertion has progressed. When the progress bar reaches about 75% it stops progressing, and I have to terminate the program manually because the process doesn't seem to complete. If I use less data (say 10000 records), the progress bar finishes and the process completes. However, when I try to insert all the records, this no longer happens. Note that if I wait longer before terminating the program manually, more records have been inserted: for example, if I terminate the program after 15 minutes, 200000 records are inserted, whereas if I terminate it after 20 minutes, 250000 records are inserted. This program uses a single thread. In fact I can't do anything else until the process is complete. Does this have anything to do with threading or processes? Any feedback will be greatly appreciated. Thanks.

    Read the article

  • Sorted queue with dropping out elements

    - by ffriend
    I have a list of jobs and a queue of workers waiting for these jobs. All the jobs are the same, but the workers are different and sorted by their ability to perform the job. That is, the first person does this job best of all, the second does it just a little bit worse, and so on. A job is always assigned to the person with the highest skill among those who are free at that moment. When a person is assigned a job, he drops out of the queue for some time, but when he is done, he returns to his position. So, for example, at some moment in time the worker queue looks like: [x, x, .83, x, .7, .63, .55, .54, .48, ...] where x's stand for missing workers and the numbers show the skill level of the remaining workers. When there's a new job, it is assigned to the 3rd worker, as the one with the highest skill among the available workers. So the next moment the queue looks like: [x, x, x, x, .7, .63, .55, .54, .48, ...] Let's say that at this moment worker #2 finishes his job and gets back to the list: [x, .91, x, x, .7, .63, .55, .54, .48, ...] I hope the process is completely clear now. My question is what algorithm and data structure to use to implement quick search and deletion of a worker and insertion back into his position. For the moment the best approach I can see is to use a Fibonacci heap, which has amortized O(log n) for deleting the minimal element (assigning a job and deleting the worker from the queue) and O(1) for inserting him back, which is pretty good. But is there an even better algorithm / data structure that takes into account the fact that the elements are already sorted and only drop out of the queue from time to time?
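    Since the ranking itself never changes - workers only leave and re-enter at a fixed position - the queue can be reduced to an ordered set of available indices: taking the best free worker is "remove the smallest index" and a returning worker is "add his index back", both O(log n) with any balanced-tree set. A minimal sketch of that idea in Java (not necessarily better than the Fibonacci-heap plan, but simpler):

        import java.util.TreeSet;

        public class WorkerPool {
            private final double[] skill;                       // skill[i] of worker i, highest skill at index 0
            private final TreeSet<Integer> available = new TreeSet<>();

            public WorkerPool(double[] skillSortedDescending) {
                this.skill = skillSortedDescending;
                for (int i = 0; i < skill.length; i++) {
                    available.add(i);
                }
            }

            /** Assign a job to the most skilled free worker; returns his index, or -1 if nobody is free. */
            public int assignJob() {
                Integer best = available.pollFirst();           // smallest free index = highest skill, O(log n)
                return best == null ? -1 : best;
            }

            /** Worker i has finished and rejoins the pool at his fixed position. */
            public void workerReturns(int i) {
                available.add(i);                               // O(log n)
            }

            public double skillOf(int i) {
                return skill[i];
            }
        }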

    Read the article

  • Performance of stored proc when updating columns selectively based on parameters?

    - by kprobst
    I'm trying to figure out if this is relatively well-performing T-SQL (this is SQL Server 2008). I need to create a stored procedure that updates a table. The proc accepts as many parameters as there are columns in the table, and with the exception of the PK column, they all default to NULL. The body of the procedure looks like this: CREATE PROCEDURE proc_repo_update @object_id bigint ,@object_name varchar(50) = NULL ,@object_type char(2) = NULL ,@object_weight int = NULL ,@owner_id int = NULL -- ...etc AS BEGIN update object_repo set object_name = ISNULL(@object_name, object_name) ,object_type = ISNULL(@object_type, object_type) ,object_weight = ISNULL(@object_weight, object_weight) ,owner_id = ISNULL(@owner_id, owner_id) -- ...etc where object_id = @object_id return @@ROWCOUNT END So basically: Update a column only if its corresponding parameter was provided, and leave the rest alone. This works well enough, but as the ISNULL call will return the value of the column if the received parameter was null, will SQL Server optimize this somehow? This might be a performance bottleneck on the application where the table might be updated heavily (insertion will be uncommon so the performance there is not a problem). So I'm trying to figure out what's the best way to do this. Is there a way to condition the column expressions with something like CASE WHEN or something? The table will be indexed up the wazoo as well for read performance. Is this the best approach? My alternative at this point is to create the UPDATE expression in code (e.g. inline SQL) and execute it against the server. This would solve my doubts about performance, but I'd rather leave this in a stored proc if possible.

    Read the article

  • C++ std::vector memory/allocation

    - by aaa
    From a previous question about vector capacity, http://stackoverflow.com/questions/2663170/stdvector-capacity-after-copying, Mr. Bailey said: In current C++ you are guaranteed that no reallocation occurs after a call to reserve until an insertion would take the size beyond the value of the previous call to reserve. Before a call to reserve, or after a call to reserve when the size is between the value of the previous call to reserve and the capacity the implementation is allowed to reallocate early if it so chooses. So, if I understand correctly, in order to ensure that no reallocation happens until the capacity is exceeded, I must call reserve twice? Can you please clarify? I am using a vector as a memory stack like this: std::vector<double> memory; memory.reserve(size); memory.insert(memory.end(), matrix.data().begin(), matrix.data().end()); // smaller than size size_t offset = memory.size(); memory.resize(memory.capacity(), 0); I need to guarantee that reallocation does not happen in the above. Thank you. PS: I would also like to know if there is a better way to manage a memory stack in a similar manner other than a vector.

    Read the article

  • Return REF CURSOR to procedure generated data

    - by ThaDon
    I need to write a sproc which performs some INSERTs on a table and compiles a list of "statuses" for each row based on how well the INSERT went. Each row is inserted within a loop; the loop iterates over a cursor that supplies some values for the INSERT statement. What I need to return is a resultset which looks like this: FIELDS_FROM_ROW_BEING_INSERTED.., STATUS VARCHAR2 The STATUS is determined by how the INSERT went. For instance, if the INSERT caused a DUP_VAL_ON_INDEX exception indicating there was a duplicate row, I'd set the STATUS to "Dupe". If all went well, I'd set it to "SUCCESS" and proceed to the next row. By the end of it all, I'd have a resultset of N rows, where N is the number of insert statements performed, and each row contains some identifying info for the row being inserted along with the "STATUS" of the insertion. Since there is no table in my DB to store the values I'd like to pass back to the user, I'm wondering how I can return the info. A temporary table? It seems that in Oracle temporary tables are "global", and I'm not sure I want a global table; are there any temporary tables that get dropped after the session is done?

    Read the article

  • In Rails, how would I include a section of a page only if the rest of the page doesn't match a certain regexp?

    - by Simon
    We have a site with a lot of user-generated content, and we'd like to show Google ads on it. Some of the content is such that we mustn't show the ads on pages containing that content, or else the whole site gets banned. We've come up with a regexp which we think will match all the offending content. So, three approaches come to mind: (1) render the page once without the ad section, and then insert the ad section into it if it's clean; (2) render the page as normal, and do the insertion in client-side javascript; (3) render the page above the ad section, capturing only the parts of the page that change; make sure there are no changing parts afterwards, only show the ads if the captured text is clean, and make sure the unchanging, uncaptured parts are well-vetted in advance. The first one seems like it might delay the page rendering for too long; the second seems like it might delay showing the ads too long; and the third seems too fragile. Is there a better approach? If not, which one is the best solution of the three?

    Read the article

  • firefox does not load large size images

    - by Pradeep
    I am stuck with a kind of bug in FF, wherein it’s unable to load images of big size (I have 8 MB size of image) from the server. The loading of image is all fine on IE. I am still looking out for ways to get rid of this problem. I changed server(IIS) settings to allow bigger file sizes. Also, I used “load” event on image using JQuery and tried all sort of options listed here http://api.jquery.com/load-event/, but nothing worked so far. If anyone of you has come across any such similar problem, and a way to resolve it, it would be nice to hear from you Please note: high resolution images are part of the requirement. Code : <style> img { background-color: #FFFFFF; background-image: url(http://eremurus.hyd:8080/QMS/plugin/imagepanner/loader.gif); background-repeat: no-repeat; background-position: center center; } </style> <script src="../plugin/jquery-ui-1.8.7.custom/js/jquery-1.4.4.min.js" type="text/javascript"></script> <script> jQuery(document).ready(function($){ ///var _url = "http://eremurus.hyd:8080/QMS/plugin/imagepanner/floorPlan.jpg"; // set up the node / element _im =$("#main"); //_im.bind("load",function(){ $(this).fadeIn(); }); // set the src attribute now, after insertion to the DOM //_im.attr('src',_url); $("#main").one("load",function(){ alert('loaded'); }) .each(function(){ if(this.complete){ $(this).trigger("load"); } }); }); </script> </head> <body> <div id="target"><img id='main' src="http://eremurus.hyd:8080/QMS/plugin/imagepanner/floorPlan.jpg"> </img></div> </body> </html>

    Read the article

  • How do I make a lock that allows only ONE thread to read from the resource ?

    - by mare
    I have a file that holds an integer ID value. Currently reading the file is protected with ReaderWriterLockSlim as such: public int GetId() { _fileLock.EnterUpgradeableReadLock(); int id = 0; try { if(!File.Exists(_filePath)) CreateIdentityFile(); FileStream readStream = new FileStream(_filePath, FileMode.Open, FileAccess.Read); StreamReader sr = new StreamReader(readStream); string line = sr.ReadLine(); sr.Close(); readStream.Close(); id = int.Parse(line); return int.Parse(line); } finally { SaveNextId(id); // increment the id _fileLock.ExitUpgradeableReadLock(); } } The problem is that subsequent actions after GetId() might fail. As you can see the GetId() method increments the ID every single time, disregarding what happens after it has issued an ID. The issued ID might be left hanging (as said, exceptions might occur). As the ID is incremented, some IDs might be left unused. So I was thinking of moving the SaveNextId(id) out, remove it (the SaveNextId() actually uses the lock too, except that it's EnterWriteLock). And call it manually from outside after all the required methods have executed. That brings out another problem - multiple threads might enter the GetId() method before the SaveNextId() gets executed and they might all receive the same ID. I don't want any solutions where I have to alter the IDs after the operation, correcting them in any way because that's not nice and might lead to more problems. I need a solution where I can somehow callback into the FileIdentityManager (that's the class that handles these IDs) and let the manager know that it can perform the saving of the next ID and then release the read lock on the file containing the ID. Essentialy I want to replicate the relational databases autoincrement behaviour - if anything goes wrong during row insertion, the ID is not used, it is still available for use but it also never happens that the same ID is issued. Hopefully the question is understandable enough for you to provide some solutions..
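    Whatever locking primitive is used (the question's ReaderWriterLockSlim, a mutex, or anything else), the key change is to make "read the ID, write back ID+1" a single exclusive critical section, so the same value can never be handed out twice; whether the caller later fails is then a separate concern, and gaps left by failed callers are exactly what database identity columns and sequences accept too. A language-neutral sketch of that shape (written in Java; file handling is simplified and the lock only guards against threads in the same process):

        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;

        public class FileIdentityManager {
            private final Path idFile;
            private final Object lock = new Object();   // one thread at a time; does not protect across processes

            public FileIdentityManager(String path) {
                this.idFile = Paths.get(path);
            }

            /** Read the current ID, persist current + 1, and return the current one, atomically for callers in this process. */
            public int nextId() throws IOException {
                synchronized (lock) {
                    int id = 1;
                    if (Files.exists(idFile)) {
                        id = Integer.parseInt(new String(Files.readAllBytes(idFile), StandardCharsets.UTF_8).trim());
                    }
                    Files.write(idFile, String.valueOf(id + 1).getBytes(StandardCharsets.UTF_8));
                    return id;
                }
            }
        }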

    Read the article

  • SQLite3 database doesn't actually insert data - iPhone

    - by user334934
    I'm trying to add a new entry into my database, but it's not working. There are no errors thrown, and the code that is supposed to be executed after the insertion runs, meaning there are no errors with the query. But still, nothing is added to the database. I've tried both prepared statements and the simpler sqlite3_exec and it's the same result. I know my database is being loaded because the info for the tableview (and subsequent tableviews) are loaded from the database. The connection isn't the problem. Also, the log of the sqlite3_last_insert_rowid(db) returns the correct number for the next row. But still, the information is not saved. Here's my code: db = [Database openDatabase]; NSString *query = [NSString stringWithFormat:@"INSERT INTO lists (name) VALUES('%@')", newField.text]; NSLog(@"Query: %@",query); sqlite3_stmt *statement; if (sqlite3_prepare_v2(db, [query UTF8String], -1, &statement, nil) == SQLITE_OK) { if(sqlite3_step(statement) == SQLITE_DONE){ NSLog(@"You created a new list!"); int newListId = sqlite3_last_insert_rowid(db); MyList *newList = [[MyList alloc] initWithName:newField.text idNumber:[NSNumber numberWithInt:newListId]]; [self.listArray addObject:newList]; [newList release]; [self.tableView reloadData]; sqlite3_finalize(statement); } else { NSAssert1(0, @"Error while inserting data. '%s'", sqlite3_errmsg(db)); } } [Database closeDatabase:db]; Again, no errors have been thrown. The prepare and step statements return SQLITE_OK and SQLITE_DONE respectively, yet nothing happens. Any help is appreciated!

    Read the article

  • No internet via wifi/ethernet after 14.04 upgrade

    - by Rhys Evans
    I have just updated to Ubuntu 14.04 LTS on my laptop, and I seem to be having some internet problems. I have no internet connection through wifi or Ethernet, although both worked in the previous version. I am not at all knowledgeable about Ubuntu and its workings, so if you could just tell me what to do and which commands to run to show you more information, that would be the only way I will understand, sorry! I am asking this after many searches, all being in vain because each one needed a step involving some sort of internet access, which I can't get! So sorry if this has been answered somewhere; if so, please send me there! Cheers. This is what I get when using sudo lspci -v:

    Read the article

  • Office 365 Essentials - Subscriptions and Licenses

    Should you be planning to move from Exchange to Office 365? If so, why? What sort of license should you get, and should you use cloud identities or federated identities for your users?

    Read the article

  • Coming up with manageable game ideas as a hobbyist game developer

    - by Kragen
    I'm trying to come up with ideas for games to develop - as per the advice on this question I've started jotting down and brainstorming my ideas as I get them, and it has worked relatively well - I now have a growing collection of ideas that I think are relatively original. The trouble is that I'm a solo hobbyist developer so my time is limited (and I have short attention span!) I've decided to set myself a limit of 1 working week (i.e. 35-40 hours) to develop / prototype my game, but all of the ideas that really spark my imagination are far too complex to be achievable in that sort of time (e.g. RTS or RPG style gameplay), and none of my simpler ideas really strike me as being that good (and whenever I get a flash of inspiration I invariably end up making things more complicated!) Am I being too picky - should I just take one of my simpler ideas and have a go?

    Read the article

  • How do you share your craft with non programmers?

    - by EpsilonVector
    Sometimes I feel like a musician who can't play live shows. Programming is a pretty cool skill, and a very broad world, but a lot of it happens "off camera" - in your head, in your office, away from spectators. You can of course talk about programming with other programmers, and there is pair programming, and you do get to create something that you can show to people, but when it comes to explaining to non programmers what it is that you do, or how your day at work was, it's sort of tricky. How do you get the non programmers in your life to understand what it is that you do? NOTE: this is not a repeat of Getting non-programmers to understand the development process, because that question was about managing client expectations.

    Read the article

  • How to Modify Data Security in Fusion Applications

    - by Elie Wazen
    The reference implementation in Fusion Applications is designed with built-in data security on business objects that implement the most common business practices.  For example, the “Sales Representative” job has the following two data security rules implemented on an “Opportunity” to restrict the list of Opportunities that are visible to an Sales Representative: Can view all the Opportunities where they are a member of the Opportunity Team Can view all the Opportunities where they are a resource of a territory in the Opportunity territory team While the above conditions may represent the most common access requirements of an Opportunity, some customers may have additional access constraints. This blog post explains: How to discover the data security implemented in Fusion Applications. How to customize data security Illustrative example. a.) How to discover seeded data security definitions The Security Reference Manuals explain the Function and Data Security implemented on each job role.  Security Reference Manuals are available on Oracle Enterprise Repository for Oracle Fusion Applications. The following is a snap shot of the security documented for the “Sales Representative” Job. The two data security policies define the list of Opportunities a Sales Representative can view. Here is a sample of data security policies on an Opportunity. Business Object Policy Description Policy Store Implementation Opportunity A Sales Representative can view opportunity where they are a territory resource in the opportunity territory team Role: Opportunity Territory Resource Duty Privilege: View Opportunity (Data) Resource: Opportunity A Sales Representative can view opportunity where they are an opportunity sales team member with view, edit, or full access Role: Opportunity Sales Representative Duty Privilege: View Opportunity (Data) Resource: Opportunity Description of Columns Column Name Description Policy Description Explains the data filters that are implemented as a SQL Where Clause in a Data Security Grant Policy Store Implementation Provides the implementation details of the Data Security Grant for this policy. In this example the Opportunities listed for a “Sales Representative” job role are derived from a combination of two grants defined on two separate duty roles at are inherited by the Sales Representative job role. b.) How to customize data security Requirement 1: Opportunities should be viewed only by members of the opportunity team and not by all the members of all the territories on the opportunity. Solution: Remove the role “Opportunity Territory Resource Duty” from the hierarchy of the “Sales Representative” job role. Best Practice: Do not modify the seeded role hierarchy. Create a custom “Sales Representative” job role and build the role hierarchy with the seeded duty roles. Requirement 2: Opportunities must be more restrictive based on a custom attribute that identifies if a Opportunity is confidential or not. Confidential Opportunities must be visible only the owner of the Opportunity. Solution: Modify the (2) data security policy in the above example as follows: A Sales Representative can view opportunity where they are a territory resource in the opportunity territory team and the opportunity is not confidential. Implementation of this policy is more invasive. The seeded SQL where clause of the data security grant on “Opportunity Territory Resource Duty” has to be modified and the condition that checks for the confidential flag must be added. 
Best Practice: Do not modify the seeded grant. Create a new grant with the modified condition. End Date the seeded grant. c.) Illustrative Example (Implementing Requirement 2) A data security policy contains the following components: Role Object Instance Set Action Of the above four components, the Role and Instance Set are the only components that are customizable. Object and Actions for that object are seed data and cannot be modified. To customize a seeded policy, “A Sales Representative can view opportunity where they are a territory resource in the opportunity territory team”, Find the seeded policy Identify the Role, Object, Instance Set and Action components of the policy Create a new custom instance set based on the seeded instance set. End Date the seeded policies Create a new data security policy with custom instance set c-1: Find the seeded policy Step 1: 1. Find the Role 2. Open 3. Find Policies Step 2: Click on the Data Security Tab Sort by “Resource Name” Find all the policies with the “Condition” as “where they are a territory resource in the opportunity territory team” In this example, we can see there are 5 policies for “Opportunity Territory Resource Duty” on Opportunity object. Step 3: Now that we know the policy details, we need to create new instance set with the custom condition. All instance sets are linked to the object. Find the object using global search option. Open it and click on “condition” tab Sort by Display name Find the Instance set Edit the instance set and copy the “SQL Predicate” to a notepad. Create a new instance set with the modified SQL Predicate from above by clicking on the icon as shown below. Step 4: End date the seeded data security policies on the duty role and create new policies with your custom instance set. Repeat the navigation in step Edit each of the 5 policies and end date them 3. Create new custom policies with the same information as the seeded policies in the “General Information”, “Roles” and “Action” tabs. 4. In the “Rules” tab, please pick the new instance set that was created in Step 3.

    Read the article
