Search Results

Search found 23347 results on 934 pages for 'key storage'.


  • Mapping composite foreign keys in a many-many relationship, with overlapping components.

    - by Kirk Broadhurst
    I have a Page table and a View table, with a many-many relationship between them via a PageView table. Unfortunately all of these tables need composite keys (for business reasons). Page has a primary key of (PageCode, Version); View has a primary key of (ViewCode, Version). PageView, naturally enough, has PageCode, ViewCode, and Version. The FK to Page is (PageCode, Version) and the FK to View is (ViewCode, Version). This makes sense and works, but when I try to map it in Entity Framework I get:

        Error 3021: Problem in mapping fragments...: Each of the following columns in table PageView
        is mapped to multiple conceptual side properties: PageView.Version is mapped to
        (PageView_Association.View.Version, PageView_Association.Page.Version)

    So, clearly enough, EF is complaining about the Version column being a common component of the two foreign keys. Obviously I could create separate PageVersion and ViewVersion columns in the join table, but that rather defeats the point of the constraint, i.e. that the Page and View must have the same Version value. Has anyone encountered this, and is there anything I can do to get around it? Thanks!
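
    For comparison, the same overlapping-key schema can be expressed in JPA (a Java analogue, not EF): map the shared Version column once, on the composite id, and make the overlapping associations read-only over those columns. All names below are taken from the question; this is an untested sketch of the relational pattern, not a fix for the EF mapping itself.

        import java.io.Serializable;
        import javax.persistence.*;

        @Embeddable
        class PageId implements Serializable { String pageCode; int version; }

        @Entity
        class Page { @EmbeddedId PageId id; }

        @Embeddable
        class ViewId implements Serializable { String viewCode; int version; }

        @Entity
        class View { @EmbeddedId ViewId id; }

        @Embeddable
        class PageViewId implements Serializable {
            String pageCode;
            String viewCode;
            int version;   // the Version column lives here, mapped exactly once
        }

        @Entity
        class PageView {
            @EmbeddedId
            PageViewId id;

            // Read-only references: the columns are owned by the id above, so
            // the shared Version column is never mapped twice for writing.
            @ManyToOne
            @JoinColumns({
                @JoinColumn(name = "PageCode", referencedColumnName = "PageCode",
                            insertable = false, updatable = false),
                @JoinColumn(name = "Version", referencedColumnName = "Version",
                            insertable = false, updatable = false) })
            Page page;

            @ManyToOne
            @JoinColumns({
                @JoinColumn(name = "ViewCode", referencedColumnName = "ViewCode",
                            insertable = false, updatable = false),
                @JoinColumn(name = "Version", referencedColumnName = "Version",
                            insertable = false, updatable = false) })
            View view;
        }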

    Read the article

  • MySQL keeps crashing OS server.. Please help adjust my.ini!

    - by TruMan1
    I have MySQL 5.0 installed on a Windows 2008 machine (3GB RAM). My server crashes on a regular basis (almost once a day) with this error:

        Changed limits: max_open_files: 2048 max_connections: 800 table_cache: 619

    I did not use the heavy-InnoDB .ini template, though I am now rethinking whether I should have. I am worried that big configuration changes will make my current sites stop working. What should I do? Here are my current ini settings:

        default-character-set=latin1
        default-storage-engine=INNODB
        max_connections=800
        query_cache_size=84M
        table_cache=1520
        tmp_table_size=30M
        thread_cache_size=38
        myisam_max_sort_file_size=100G
        myisam_sort_buffer_size=30M
        key_buffer_size=129M
        read_buffer_size=64K
        read_rnd_buffer_size=256K
        sort_buffer_size=256K
        innodb_additional_mem_pool_size=6M
        innodb_flush_log_at_trx_commit=1
        innodb_log_buffer_size=3M
        innodb_buffer_pool_size=250M
        innodb_log_file_size=50M
        innodb_thread_concurrency=10

    Read the article

  • Optimizing MySQL for ALTER TABLE of InnoDB

    - by schuilr
    Sometime soon we will need to make schema changes to our production database. We need to minimize downtime for this effort; however, the ALTER TABLE statements are going to run for quite a while. Our largest tables have 150 million records, and the largest table file is 50G. All tables are InnoDB, and it was set up as one big data file (instead of file-per-table). We're running MySQL 5.0.46 on an 8-core machine with 16G of memory and a RAID10 config. I have some experience with MySQL tuning, but it usually focuses on reads or writes from multiple clients. There is lots of information on the Internet about tuning in general; however, there seems to be very little on best practices for (temporarily) tuning a MySQL server to speed up ALTER TABLE on InnoDB tables, or for INSERT INTO .. SELECT FROM (we will probably use the latter instead of ALTER TABLE, to give ourselves more opportunities to speed things up). The schema change we are planning is to add an integer column to every table and make it the primary key in place of the current one. We need to keep the 'old' column as well, so overwriting the existing values is not an option. What would be the ideal settings to get this task done as quickly as possible?
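
    For reference, a minimal sketch of the copy-table variant of INSERT INTO .. SELECT mentioned above (table and column names are placeholders, not from the question; untested):

        -- Clone the structure, swap the primary key to a new surrogate column,
        -- copy the data, then rename. 'old_pk' and 'col1' are placeholders.
        CREATE TABLE t_new LIKE t;
        ALTER TABLE t_new
            DROP PRIMARY KEY,
            ADD COLUMN new_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            ADD KEY (old_pk);
        INSERT INTO t_new (old_pk, col1)
            SELECT old_pk, col1 FROM t;
        RENAME TABLE t TO t_old, t_new TO t;

    Adding secondary indexes after the copy, rather than before it, is the usual way to keep the bulk insert itself fast.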

    Read the article

  • Hibernate bi-directional many-to-many mapping advice!

    - by Rob
    Hi all, I wondered if anyone might be able to help me out. I am trying to work out what to google for (or any other ideas!!). Basically I have a bidirectional many-to-many mapping between a user entity and a club entity (via a join table called userClubs). I now want to include a column in userClubs that represents the role, so that when I call user.getClubs() I can also work out what level of access they have. Is there a clever way to do this using Hibernate, or do I need to rethink the database structure? Thank you for any help (or just for reading this far!!). The user.hbm.xml looks a bit like:

        <set name="clubs" table="userClubs" cascade="save-update">
            <key column="user_ID"/>
            <many-to-many column="activity_ID" class="com.ActivityGB.client.domain.Activity"/>
        </set>

    the activity.hbm.xml part:

        <set name="members" inverse="true" table="userClubs" cascade="save-update">
            <key column="activity_ID"/>
            <many-to-many column="user_ID" class="com.ActivityGB.client.domain.User"/>
        </set>

    The current userClubs table contains the fields:

        id | user_ID | activity_ID

    I would like to include in there:

        id | user_ID | activity_ID | role

    and be able to access the role on both sides...
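
    One commonly suggested approach, sketched below and untested: stop mapping userClubs as a bare many-to-many and map each join-table row as a composite element that carries the extra column. UserClubLink is a hypothetical small value class holding the Activity reference and the role; the other names are borrowed from the question.

        <!-- user.hbm.xml: each userClubs row becomes a value object with a role -->
        <set name="clubs" table="userClubs" cascade="save-update">
            <key column="user_ID"/>
            <composite-element class="com.ActivityGB.client.domain.UserClubLink">
                <many-to-one name="activity" column="activity_ID"
                             class="com.ActivityGB.client.domain.Activity"/>
                <property name="role" column="role" type="string"/>
            </composite-element>
        </set>

    The alternative is to promote the join table to a full entity (e.g. UserClub) with many-to-one references to both User and Activity plus a role property, which tends to be easier to query and to extend later.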

    Read the article

  • Read from one large file and write to many (tens, hundreds, or thousands) files in Java?

    - by Rudiger
    I have a large-ish file (4-5 GB compressed) of small messages that I wish to parse into approximately 6,000 files by message type. Messages are small; anywhere from 5 to 50 bytes depending on the type. Each message starts with a fixed-size type field (a 6-byte key). If I read a message of type '000001', I want to append its payload to 000001.dat, etc. The input file contains a mixture of messages; I want N homogeneous output files, where each output file contains only the messages of a given type. What's an efficient, fast way of writing these messages to so many individual files? I'd like to use as much memory and processing power as it takes to get it done as fast as possible. I can write compressed or uncompressed files to the disk. I'm thinking of using a hashmap with a message type key and an outputstream value, but I'm sure there's a better way to do it. Thanks!
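
    A minimal sketch of the hashmap-of-streams idea from the question. The 6-byte key and the .dat file names are from the question; the single length byte after the key, the buffer sizes, and the class name are assumptions made for illustration, since the real record framing isn't specified.

        import java.io.*;
        import java.util.HashMap;
        import java.util.Map;

        public class MessageDemux {
            // One buffered stream per message type, opened lazily and kept open
            // so each of the ~6,000 files is opened exactly once.
            private final Map<String, OutputStream> outs =
                    new HashMap<String, OutputStream>();

            private OutputStream outFor(String type) throws IOException {
                OutputStream out = outs.get(type);
                if (out == null) {
                    out = new BufferedOutputStream(
                            new FileOutputStream(type + ".dat", true), 1 << 16);
                    outs.put(type, out);
                }
                return out;
            }

            // Assumed framing: 6-byte type key, then a length byte, then payload.
            public void demux(InputStream in) throws IOException {
                DataInputStream data =
                        new DataInputStream(new BufferedInputStream(in, 1 << 20));
                byte[] key = new byte[6];
                try {
                    while (true) {
                        data.readFully(key);              // EOFException ends the loop
                        int len = data.readUnsignedByte();
                        byte[] payload = new byte[len];
                        data.readFully(payload);
                        outFor(new String(key, "US-ASCII")).write(payload);
                    }
                } catch (EOFException endOfInput) {
                    // normal end of input
                }
                for (OutputStream out : outs.values()) {
                    out.close();
                }
            }
        }

    One caveat: thousands of simultaneously open streams can hit the OS file-handle limit, in which case an LRU cache of streams (closing the least-recently-used one and re-opening in append mode) is the usual workaround.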

    Read the article

  • Linear Performance Scalability with HP SAN Solutions

    - by Berzemus
    Hi all, I need a SAN solution with linear scalability in size as well as in performance. From what I know, with a Modular Smart Array solution such as the P2000/MSA-class solutions from HP, even starting with a dual-controller node I can only increase its size: added nodes come controller-less, so overall performance tends to decrease. On the other hand, in the P4000 (LeftHand) family of solutions each node has its own controller, so when a node is added, storage capacity and performance increase together. Am I right in all of this? Is the P4000 the only such solution, or have I forgotten something?

    Read the article

  • Delete throws "deleted object would be re-saved by cascade"

    - by Greg
    I have the following model:

        <class name="Person" table="Person" optimistic-lock="version">
            <id name="Id" type="Int32" unsaved-value="0">
                <generator class="native" />
            </id>
            <!-- plus some properties here -->
        </class>

        <class name="Event" table="Event" optimistic-lock="version">
            <id name="Id" type="Int32" unsaved-value="0">
                <generator class="native" />
            </id>
            <!-- plus some properties here -->
        </class>

        <class name="PersonEventRegistration" table="PersonEventRegistration" optimistic-lock="version">
            <id name="Id" type="Int32" unsaved-value="0">
                <generator class="native" />
            </id>
            <property name="IsComplete" type="Boolean" not-null="true" />
            <property name="RegistrationDate" type="DateTime" not-null="true" />
            <many-to-one name="Person" class="Person" column="PersonId"
                         foreign-key="FK_PersonEvent_PersonId" cascade="all-delete-orphan" />
            <many-to-one name="Event" class="Event" column="EventId"
                         foreign-key="FK_PersonEvent_EventId" cascade="all-delete-orphan" />
        </class>

    There are no properties pointing to PersonEventRegistration in either Person or Event. When I try to delete an entry from PersonEventRegistration, I get the following error: "deleted object would be re-saved by cascade". The problem is, I don't store this object in any other collection - the delete code looks like this:

        public bool UnregisterFromEvent(Person person, Event entry)
        {
            var registrationEntry = this.session
                .CreateCriteria<PersonEventRegistration>()
                .Add(Restrictions.Eq("Person", person))
                .Add(Restrictions.Eq("Event", entry))
                .Add(Restrictions.Eq("IsComplete", false))
                .UniqueResult<PersonEventRegistration>();

            bool result = false;
            if (null != registrationEntry)
            {
                using (ITransaction tx = this.session.BeginTransaction())
                {
                    this.session.Delete(registrationEntry);
                    tx.Commit();
                    result = true;
                }
            }

            return result;
        }

    What am I doing wrong here?
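
    One thing worth checking (offered as a guess, not a verified fix): cascade="all-delete-orphan" is meant for collections, not for many-to-one references. On a many-to-one it asks NHibernate to cascade the delete to the referenced Person and Event, which are still live objects, and that is a classic trigger for "deleted object would be re-saved by cascade". The references would then read:

        <many-to-one name="Person" class="Person" column="PersonId"
                     foreign-key="FK_PersonEvent_PersonId" cascade="none" />
        <many-to-one name="Event" class="Event" column="EventId"
                     foreign-key="FK_PersonEvent_EventId" cascade="none" />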

    Read the article

  • Best filesystem choices for NFS storing VMware disk images

    - by mlambie
    Currently we use an iSCSI SAN as storage for several VMware ESXi servers. I am investigating the use of an NFS target on a Linux server for additional virtual machines. I am also open to the idea of using an alternative operating system (like OpenSolaris) if it will provide significant advantages. What Linux-based filesystem favours very large contiguous files (like VMware's disk images)? Alternatively, how have people found ZFS on OpenSolaris for this kind of workload? (This question was originally asked on SuperUser; feel free to migrate answers here if you know how).

    Read the article

  • Comparing all values within a List against each other

    - by Kave
    I am a bit stuck here and can't think further.

        public struct CandidateDetail
        {
            public int CellX { get; set; }
            public int CellY { get; set; }
            public int CellId { get; set; }
        }

        var dic = new Dictionary<int, List<CandidateDetail>>();

    How can I compare each CandidateDetail item against the other CandidateDetail items within the same dictionary in the most efficient way? Example: there are three keys in the dictionary, 5, 6 and 1, so we have three entries. Each key has a List associated with it; in this case, say each of the three numbers has exactly two CandidateDetail items in its list. In other words, we have two 5s, two 6s and two 1s, in the same or in different cells. I would like to know:

        if [5].1stItem.CellId == [6].1stItem.CellId => we got a hit. That means we have a 5 and a 6 within the same cell.
        if [5].2ndItem.CellId == [6].2ndItem.CellId => perfect. We found out that the other 5 and 6 are together within a different cell.
        if [1].1stItem.CellId == ...

    Now I need to check the 1 against the other 5s and 6s as well, to see whether the 1 occurs in the same two cells found earlier or not. Could a LINQ expression help here, perhaps? I am quite stuck... maybe I am taking the wrong approach. I am trying to solve the "hidden pair" step of Sudoku. :) http://www.sudokusolver.eu/ExplainSolveMethodD.aspx Many thanks, Kave

    Read the article

  • Equivalent of the 32-bit Carbon call GetApplicationEventTarget() for use in a 64-bit application

    - by Dheeraj
    Hi all, I'm writing a 64-bit Cocoa application. I need to register for global key events, so I wrote this piece of code:

        - (void)awakeFromNib
        {
            EventHotKeyRef gMyHotKeyRef;
            EventHotKeyID gMyHotKeyID;
            EventTypeSpec eventType;
            eventType.eventClass = kEventClassKeyboard;
            eventType.eventKind = kEventHotKeyPressed;
            InstallApplicationEventHandler(&MyHotKeyHandler, 1, &eventType, NULL, NULL);
            gMyHotKeyID.signature = 'htk1';
            gMyHotKeyID.id = 1;
            RegisterEventHotKey(49, cmdKey + optionKey, gMyHotKeyID,
                                GetApplicationEventTarget(), 0, &gMyHotKeyRef);
        }

    But since GetApplicationEventTarget() is not supported for 64-bit applications, I'm getting errors. If I declare it myself, I don't get any errors, but the application crashes. Is there an equivalent method for GetApplicationEventTarget() (defined in the Carbon framework) to use in 64-bit applications? Or is there any way to get the global key events using Cocoa calls? Any help is appreciated. Thanks, Dheeraj.

    Read the article

  • ASP.NET MVC Map String Url To A Route Value Object

    - by mwgriffiths
    I am creating a modular ASP.NET MVC application using areas. In short, I have created a greedy route that captures all routes beginning with {application}/{*catchAll}. Here is the action:

        // get /application/index
        public ActionResult Index(string application, object catchAll)
        {
            // forward to partial request to return partial view
            ViewData["partialRequest"] = new PartialRequest(catchAll);
            // this gets called in the view page and uses a partial request class to return a partial view
        }

    Example: the URL "/Application/Accounts/LogOn" will then cause the Index action to pass "/Accounts/LogOn" into the PartialRequest, but as a string value.

        // partial request constructor
        public PartialRequest(object routeValues)
        {
            RouteValueDictionary = new RouteValueDictionary(routeValues);
        }

    In this case, the route value dictionary will not return any values for the routeData, whereas if I specify a route in the Index action:

        ViewData["partialRequest"] = new PartialRequest(new { controller = "accounts", action = "logon" });

    it works, and the routeData values contain a "controller" key and an "action" key; before, the keys are empty, and therefore the rest of the class won't work. So my question is: how can I convert the "/Accounts/LogOn" in the catchAll to new { controller = "accounts", action = "logon" }? If this is not clear, I will explain more! :) Matt

    This is the "closest" I have got, but it obviously won't work for complex routes:

        // split values into array
        var routeParts = catchAll.ToString().Split(new char[] { '/' }, StringSplitOptions.RemoveEmptyEntries);

        // feels like a hack
        catchAll = new { controller = routeParts[0], action = routeParts[1] };

    Read the article

  • Uploading to S3 using Curl

    - by Carl Crawley
    Hi all, I'm currently using cURL to upload a file from my server to S3, using AJAX to call the script. So I have the following:

        $fullfilepath = '/server/sitepath/files/' . $_POST['file'];
        $upload_url = 'https://' . $_POST['buckets'] . '.s3.amazonaws.com/';

        $params = array(
            'key' => $_POST['key'],
            'AWSAccessKeyId' => $_POST['AWSAccessKeyId'],
            'acl' => $_POST['acl'],
            'success_action_status' => $_POST['success_action_status'],
            'policy' => $_POST['policy'],
            'signature' => $_POST['signature'],
            'Content-Type' => $_POST['Content-Type'],
            'file' => "@$fullfilepath"
        );

        $ch = curl_init();
        curl_setopt($ch, CURLOPT_VERBOSE, 1);
        curl_setopt($ch, CURLOPT_URL, $upload_url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $params);
        $response = curl_exec($ch);
        curl_close($ch);
        echo $response;

    However, I'm getting the following S3 error when it posts, and I'm unsure why, because I'm not passing JSON to it:

        <?xml version="1.0" encoding="UTF-8"?>
        <Error><Code>InvalidPolicyDocument</Code><Message>Invalid Policy: Invalid JSON.</Message><RequestId>B29469C6151BE0E8</RequestId><HostId>BFPk6W2kt1b6hTtx0mEq6dWdN/IhO0gNR5bct//7LAOwJxm1C3PrxS4RPv1blzJ8</HostId></Error>

    I've googled it for the last hour or so and can't seem to figure it out. If I change the order of the array fields, it gives me a different error - I believe the order of the posted fields is somehow important. Any help would be much appreciated! C

    Read the article

  • Personal wiki on USB / the cloud?

    - by drby
    I'm looking for a personal wiki that can be installed on a USB stick and (more importantly) somewhere in the cloud (Dropbox). I've looked at the Wiki Matrix, but I really don't care that much about any of the options, so I end up with a choice of ~50 wikis at the end. I tried out TiddlyWiki, but some things about it really annoy me, like the fact that all pages get opened on the same page. It really looks like it would turn into a giant mess pretty quickly. I'd like something that's pretty close to Wikipedia in terms of appearance and usability. Hierarchical categories for organization would be really nice, as would accessible storage (in case I ever want to convert it to something else).

    Read the article

  • Problems when I try to see databases in SQLite

    - by Sabau Andreea
    I created a database and two tables in code:

        static final String dbName = "graficeCirculatie";
        static final String ruteTable = "Rute";
        static final String colRuteId = "RutaID";
        static final String colRuta = "Ruta";
        static final String statiaTable = "Statia";
        static final String colStatiaID = "StatiaID";
        static final String colIdRuta = "IdRuta";
        static final String colStatia = "Statia";

        public DatabaseHelper(Context context) {
            super(context, dbName, null, 33);
        }

        public void onCreate(SQLiteDatabase db) {
            db.execSQL("CREATE TABLE " + statiaTable + " (" + colStatiaID
                    + " INTEGER PRIMARY KEY, " + colIdRuta + " INTEGER, "
                    + colStatia + " TEXT)");
            db.execSQL("CREATE TABLE " + ruteTable + "(" + colRuteId
                    + " INTEGER PRIMARY KEY AUTOINCREMENT, " + colRuta + " TEXT);");
            InsertDepts(db);
        }

        void InsertDepts(SQLiteDatabase db) {
            ContentValues cv = new ContentValues();
            cv.put(colRuteId, 1);
            cv.put(colRuta, "Expres8");
            db.insert(ruteTable, colRuteId, cv);
            cv.put(colRuteId, 2);
            cv.put(colRuta, "Expres2");
            db.insert(ruteTable, colRuteId, cv);
            cv.put(colRuteId, 3);
            cv.put(colRuta, "Expres3");
            db.insert(ruteTable, colRuteId, cv);
        }

    Now I want to see the table contents from the command line. I tried this:

        C:\Program Files\Android\android-sdk\tools> sqlite3
        SQLite version 3.7.4
        Enter ".help" for instructions
        Enter SQL statements terminated with a ";"
        sqlite> sqlite3 graficeCirculatie
           ...> select * from ruteTable;

    And I got an error: Error: near "squlite3": syntax error. Can someone help me?
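
    For what it's worth, a sketch of the usual way to open an Android database in the sqlite3 shell: the database name goes on the command line, not inside the prompt, and the table to query is Rute (ruteTable is only the Java variable name). The device path and package name below are assumptions based on the standard Android layout.

        adb pull /data/data/your.package.name/databases/graficeCirculatie
        sqlite3 graficeCirculatie
        sqlite> .tables
        sqlite> SELECT * FROM Rute;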

    Read the article

  • VPS for Glassfish

    - by Harry Pham
    Our small startup company plans to deploy a web application on GlassFish, and I wonder if some of the experienced users out there can answer a couple of questions. When shopping for a server I usually look at the amount of RAM, as GlassFish does require a good amount of RAM to run. Below are two sites with a significant price difference for the same amount of RAM; I wonder why?

    GoDaddy: http://www.godaddy.com/hosting/virtual-dedicated-servers.aspx?ci=9013 versus http://entic.net/Servers

    Is the plan below from GoDaddy considered good enough to run a GlassFish application?

        OS: Linux CentOS
        RAM: 4 GB
        Storage: 60 GB
        Bandwidth: 2,000 GB/mo

    Our web application is a social network, expected to have 2,000-4,000 users to start with.

    Read the article

  • JCo | How to iterate column-wise

    - by cedar715
    The data from SAP is returned as a JCO.Table. However, we don't want to display ALL the columns in the view. So we have created a file called display.xml which lists the JCO.Table columns to be displayed. display.xml is converted to a List, and each field is checked against that display list (see the code below), which is redundant from the second row onwards.

        final Table outputTable = jcoFunction.getTableParameterList()
                .getTable("OUTPUT_TABLE");
        final int numRows = outputTable.getNumRows();
        for (int i = 0; i < numRows; i++) {
            outputTable.setRow(i);   // position the table on row i
            final FieldIterator fields = outputTable.fields();
            while (fields.hasNextFields()) {
                final JCO.Field recordField = fields.nextField();
                final String sapFieldName = recordField.getName();
                final DisplayFieldDto key = new DisplayFieldDto(sapFieldName);
                if (displayFields.contains(key)) {
                    System.out.println("recordField.getName() = " + recordField.getName());
                    final String sapFieldValue = (String) recordField.getValue();
                } else {
                    // ignore the field
                }
            }
        }

    What is a better way to filter the fields in JCo? Can I iterate column-wise? Thank you :)
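
    One way to remove the per-row redundancy, sketched against the JCo 2.x API with outputTable and displayFields reused from the question (untested): resolve the display columns once from the first row's field list, then read only those fields by name for every row.

        import java.util.ArrayList;
        import java.util.List;

        // Resolve the displayable column names a single time.
        final List<String> displayColumns = new ArrayList<String>();
        final FieldIterator fieldIt = outputTable.fields();
        while (fieldIt.hasNextFields()) {
            final String name = fieldIt.nextField().getName();
            if (displayFields.contains(new DisplayFieldDto(name))) {
                displayColumns.add(name);
            }
        }

        // Then iterate the rows, fetching only the resolved columns by name.
        for (int i = 0; i < outputTable.getNumRows(); i++) {
            outputTable.setRow(i);
            for (final String name : displayColumns) {
                final String value = outputTable.getString(name);
                // render value...
            }
        }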

    Read the article

  • Laptop stopped recognizing USB hard drive

    - by vahokif
    Hi, My Packard Bell EasyNote TX86 laptop stopped recognizing my 1 TB Toshiba Store Art hard drive. It worked fine until now, and it still works on other computers. Other USB devices (including storage) work, and I've tried plugging it in every port, to no avail. When I plug it in it spins up, but Windows doesn't react at all (it's not in disk management), Linux doesn't write anything in dmesg and I can't see it in BIOS setup. I didn't use it at all today, apart from plugging it into a freshly-installed Windows 7 machine once (where it worked). What can I do? Which device is to blame here? EDIT: One more thing. I unplugged the drive while the laptop was hibernated. Google says this might be the problem and it might have something to do with resetting the USB Host Controller.

    Read the article

  • How do you do efficient bulk index lookups?

    - by Liron Shapira
    I have these entity kinds:

        Molecule
        Atom
        MoleculeAtom

    Given a list(molecule_ids) whose length is in the hundreds, I need to get a dict of the form {molecule_id: list(atom_ids)}. Likewise, given a list(atom_ids) whose length is in the hundreds, I need to get a dict of the form {atom_id: list(molecule_ids)}. Both of these bulk lookups need to happen really fast. Right now I'm doing something like:

        atom_ids_by_molecule_id = {}
        for molecule_id in molecule_ids:
            moleculeatoms = MoleculeAtom.all().filter(
                'molecule =', db.Key.from_path('molecule', molecule_id)).fetch(1000)
            atom_ids_by_molecule_id[molecule_id] = [
                MoleculeAtom.atom.get_value_for_datastore(ma).id() for ma in moleculeatoms
            ]

    Like I said, len(molecule_ids) is in the hundreds. I need to do this kind of bulk index lookup on almost every single request, and I need it to be FAST, and right now it's too slow. Ideas:

    - Will using a Molecule.atoms ListProperty do what I need? Consider that I am storing additional data on the MoleculeAtom node, and remember it's equally important for me to do the lookup in the molecule-atom and atom-molecule directions.
    - Caching? I tried memcaching lists of atom IDs keyed by molecule ID, but I have tons of atoms and molecules, and the cache can't fit it.
    - How about denormalizing the data by creating a new entity kind whose key name is a molecule ID and whose value is a list of atom IDs? The idea is, calling db.get on 500 keys is probably faster than looping through 500 fetches with filters, right?

    Read the article

  • Virtualization deployment for datacenter

    - by bogha
    Hi, my company is going to deploy an IT infrastructure on a virtual platform. Can you please help me with the following:

    1. Which one do you recommend: Cisco Unified Computing System (Cisco + EMC + VMware) or HP blades (virtualization solution + HP storage)?
    2. I need to install a DNS server, a web server, cPanel for managing hosting packages, and the Microsoft layer of products for use in the corporate infrastructure (Active Directory, local DNS, Exchange Server, DHCP, global catalog). What are the minimum requirements for these servers (in terms of CPU and memory)?
    3. What is the best way to implement a redundant solution in a virtual environment?

    Thank you.

    Read the article

  • Using Partitions for a large MySQL table

    - by user293594
    An update on my attempts to implement a 505,000,000-row table in MySQL on my MacBook Pro: following the advice given, I have partitioned my table, tr:

        i INT UNSIGNED NOT NULL,
        j INT UNSIGNED NOT NULL,
        A FLOAT(12,8) NOT NULL,
        nu BIGINT NOT NULL,
        KEY (nu),
        KEY (A)

    with a range on nu. nu ought to be a real number, but because I only have 6-d.p. accuracy and the maximum value of nu is 30,000, I multiplied it by 10^8 and made it a BIGINT - I gather one can't use FLOAT or DOUBLE values to PARTITION a MySQL table. Anyway, I have 15 partitions (p0: nu < 25,000,000,000; p1: nu < 50,000,000,000; etc.). I was thinking that this should speed up a typical SELECT:

        SELECT * FROM tr WHERE nu > 95000000000 AND nu < 100000000000 AND A > 1;

    to something of the order of the same query on a table consisting of only the data in the relevant partition (< 30 secs). But it's taking 30 mins+ to return rows for queries within a partition, and double that if the query spans two (contiguous) partitions. I realise I could just have 15 different tables and query them separately, but is there a way to do this 'automatically' with partitions? Has anyone got any suggestions?
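
    For reference, a minimal sketch of the partition layout described above (column names and the first boundaries follow the question; the middle partitions are elided and the whole thing is untested):

        CREATE TABLE tr (
            i INT UNSIGNED NOT NULL,
            j INT UNSIGNED NOT NULL,
            A FLOAT(12,8) NOT NULL,
            nu BIGINT NOT NULL,
            KEY (nu),
            KEY (A)
        )
        PARTITION BY RANGE (nu) (
            PARTITION p0 VALUES LESS THAN (25000000000),
            PARTITION p1 VALUES LESS THAN (50000000000),
            -- ... p2 through p13 ...
            PARTITION p14 VALUES LESS THAN (MAXVALUE)
        );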

    Read the article

  • upgrading my computer system for office use

    - by denise ellul
    Presently I have the computer system listed below. What should be changed or upgraded? I am interested in performance issues related to cache memory, bus speed, RAM, and CAS latency, as well as other considerations. Thanks for your help.

        Processor (CPU): Intel Celeron Dual Core E3300 2.5 GHz
        Motherboard: Asus P5QPL-AM G41
        Main Memory (RAM): 2 GB Team Elite DDR2 PC8000
        Case: Coolermaster RC330
        Power Supply Unit: 500W EZ-Cool Standard
        Storage Device (Hard Drive): 500GB Samsung
        Video Card: Intel GMA X4500 (on-board)
        Optical Drive: LG GH22NS50
        Sound Card: AC 97 (on-board)
        Card Reader: Akasa Black
        TFT Monitor: 19" View Sonic
        Speakers: Logitech S120 2.0

    Read the article

  • PayPal Encrypted Website Payments

    - by John Isaacks
    I am trying to integrate the PayPal Website Payments Standard "cart upload" payment type into my shopping cart. I integrated Google Checkout a while back and did not find it nearly as confusing as PayPal. I am getting info on how to encrypt the payment from here: https://cms.paypal.com/us/cgi-bin/?&cmd=_render-content&content_ID=developer/e_howto_html_encryptedwebpayments#id08A3I0P017Q PayPal says I need to generate a private key and a public certificate using OpenSSL. I went to OpenSSL and downloaded the latest release, which is just a folder containing various files; I see no application I can use and am not sure what to do here. Even if I were to get OpenSSL to generate a private key and public cert, the next step is to download either an MS or Java command-line tool to create the encrypted cart ahead of time with the cart total, tax, etc., which sounds crazy to me - am I supposed to do this manually prior to every order?? Obviously I do not know beforehand which items the customer is going to buy, so I need this to be done on the fly on my website using PHP. But I am completely lost. There has to be a way to set up dynamic secure cart uploads to PayPal. Can someone please point me in the right direction?
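
    For the key-generation step at least: OpenSSL is a command-line tool rather than a GUI application. On a machine with the openssl binary installed, the usual sequence looks like the following (the .pem file names are arbitrary):

        openssl genrsa -out my-prvkey.pem 1024
        openssl req -new -key my-prvkey.pem -x509 -days 365 -out my-pubcert.pem

    The per-order encryption does not have to be done manually with the MS/Java tool; it can typically be scripted server-side, e.g. with PHP's openssl_pkcs7_sign() and openssl_pkcs7_encrypt() functions, though wiring those up to PayPal's expected format is its own exercise.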

    Read the article

  • Starting Oracle 10g on Ubuntu: listener failed to start

    - by tsegay
    I have installed Oracle 10g on Ubuntu 10.x; this is my first installation. After installing, I tried to start it with the commands below:

        tsegay@server-name:/u01/app/oracle/product/10.2.0/db_1/bin$ lsnrctl

        LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 29-DEC-2010 22:46:51
        Copyright (c) 1991, 2005, Oracle. All rights reserved.
        Welcome to LSNRCTL, type "help" for information.

        LSNRCTL> start
        Starting /u01/app/oracle/product/10.2.0/db_1/bin/tnslsnr: please wait...
        TNSLSNR for Linux: Version 10.2.0.1.0 - Production
        System parameter file is /u01/app/oracle/product/10.2.0/db_1/network/admin/listener.ora
        Log messages written to /u01/app/oracle/product/10.2.0/db_1/network/log/listener.log
        Error listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))
        TNS-12555: TNS:permission denied
        TNS-12560: TNS:protocol adapter error
        TNS-00525: Insufficient privilege for operation
        Linux Error: 1: Operation not permitted

        Listener failed to start. See the error message(s) above...

    My listener.ora file looks like this:

        # listener.ora Network Configuration File: /u01/app/oracle/product/10.2.0/db_1/network/admin/listener.ora
        # Generated by Oracle configuration tools.

        SID_LIST_LISTENER =
          (SID_LIST =
            (SID_DESC =
              (SID_NAME = PLSExtProc)
              (ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
              (PROGRAM = extproc)
            )
          )

        LISTENER =
          (DESCRIPTION_LIST =
            (DESCRIPTION =
              (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
              (ADDRESS = (PROTOCOL = TCP)(HOST = acct-vmserver)(PORT = 1521))
            )
          )

    I can guess the problem is a permission issue, but I don't know where I need to change permissions. Any help is appreciated.
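
    A couple of commonly suggested checks for TNS-12555 on the IPC endpoint (assumptions about a standard install, not a confirmed diagnosis): start the listener as the user that owns the Oracle installation, and make sure the IPC socket directory is owned by that user.

        # run lsnrctl as the oracle software owner (assuming its environment
        # sets ORACLE_HOME and PATH), not as a regular user
        su - oracle -c 'lsnrctl start'

        # stale or root-owned IPC sockets in /var/tmp/.oracle can also
        # trigger TNS-12555; inspect and fix ownership if needed
        ls -ld /var/tmp/.oracle
        sudo chown -R oracle:oinstall /var/tmp/.oracle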

    Read the article

  • Separating JavaScript functions

    - by msharma
    I am wondering how JavaScript files get included in a JSP - can we put any code that the JSP will recognize, and not just JavaScript, in the .js file? I have some common JavaScript code that needs to be executed on different pages, so I decided to place it in its own separate .js file and include it on all JSPs that call that function. The JS function now refers to a key from a properties file and some other non-JavaScript code:

        function openPrivacyStmntWindow(){
            var url = <h:outputText escape="false" value="\"#{urls.url_privacyStatement}\";" />
            newwindow = window.open(url, 'Terms', 'height=600,width=800,left=300,top=100,scrollbars=1');
            newwindow.focus();
            return false;
        }

    This function worked just fine when it was included in the JSP itself. Now that I have separated it into its own file, it doesn't. Do I need to include the properties bundle in this file? The value="\"#{urls.url_privacyStatement}\";" refers to a bundle called "urls", which has a key called "url_privacyStatement". Also, in line 1,

        var url = <h:outputText escape="false" value="\"#{urls.url_privacyStatement}\";" />

    will the <h:outputText escape="false" ... /> cause any issues? Thanks.

    Read the article

  • What are some good methods to improve personal password management?

    - by danilo
    I want to improve my personal password management. I usually use secure passwords, but reuse them in too many different places. My questions:

    - What methods do you use to create passwords, e.g. for different online sites/logins?
    - What methods do you use to remember those passwords? Memory? Pen & paper? Software storage?
    - Is there a good way to store my passwords somewhere so that I always have access to them when I need them (e.g. a web-based solution on my own server), but at the same time keep them away from unwanted access?

    Edit: Someone on another site mentioned http://passwordmaker.org/. Have you had any good or bad experiences with that software?

    Read the article
