Search Results

Search found 32407 results on 1297 pages for 'access violation'.

Page 344 of 1297

  • New Exadata public references

    - by Javier Puerta
    The following customers are now public references for Exadata. Show your customers how other companies in their industries are leveraging Exadata to achieve their business objectives.

    - Migros Bank - Financial Services - Switzerland - Oracle Exadata Database Machine + OBIEE 11g
      "Migros Bank AG Makes Systems More Available and Improves Operational Insight and Analytics with a Scalable, Integrated Data Warehouse" - Success Story (English) | Success Story (German)
    - Tech Access - Professional Services - United Arab Emirates - Oracle Exadata Database Machine
      "Tech Access Drives Compelling Proof-of-Concept Evaluations for Hardware Sales in Region's Largest Solutions Center" - Success Story
    - Balubaid Group - Wholesale Distribution - Saudi Arabia - Oracle Exadata Database Machine + OBIEE 11g
      "Balubaid Group of Companies Reduces Help-Desk Complaints by 75%, Improves Business Continuity and System Response" - Success Story
    - Etisalat - Communications - Nigeria - Oracle Exadata Database Machine
      "Etisalat Accelerates Data Retrieval and Analysis by 99 Percent with Oracle Communications Data Model Running on Oracle Exadata Database Machine" - Oracle Press Release

    Read the article

  • How to refactor a method which breaks "The law of Demeter" principle?

    - by dreza
    I often find myself breaking this principle (not intentionally, just through bad design). However, recently I've seen a bit of code that I'm not sure how best to approach. I have a number of classes; for simplicity I've taken out the bulk of the classes' methods, etc.:

        public class Paddock
        {
            public SoilType Soil { get; private set; }
            // a whole bunch of other properties around paddock information
        }

        public class SoilType
        {
            public SoilDrainageType Drainage { get; private set; }
            // a whole bunch of other properties around soil types
        }

        public class SoilDrainageType
        {
            // a whole bunch of public properties that expose soil drainage values

            public double GetProportionOfDrainage(SoilType soil, double blockRatio)
            {
                // This method does a number of calculations using public properties
                // exposed off SoilType, as well as the blockRatio value in some conditions
            }
        }

    In the code I have seen, in a number of places, calls like so:

        paddock.Soil.Drainage.GetProportionOfDrainage(paddock.Soil, paddock.GetBlockRatio());

    or, within the block object itself, in places it's:

        Soil.Drainage.GetProportionOfDrainage(this.Soil, this.GetBlockRatio());

    Upon reading, this seems to break the Law of Demeter, in that I'm chaining together these properties to access the method I want. So my thought for adjusting this was to create public wrapper methods on SoilType and Paddock, i.e. on Paddock it would be:

        public class Paddock
        {
            public double GetProportionOfDrainage()
            {
                return Soil.GetProportionOfDrainage(this.GetBlockRatio());
            }
        }

    and on SoilType it would be:

        public class SoilType
        {
            public double GetProportionOfDrainage(double blockRatio)
            {
                return Drainage.GetProportionOfDrainage(this, blockRatio);
            }
        }

    so the calls would now simply be:

        // used outside of the Paddock class, where we can access instances of Paddock
        paddock.GetProportionOfDrainage();

        // if used within the Paddock class
        this.GetProportionOfDrainage();

    This seemed like a nice alternative. However, now I have a concern over how I would enforce this usage and stop anyone else from writing code such as

        paddock.Soil.Drainage.GetProportionOfDrainage(paddock.Soil, paddock.GetBlockRatio());

    rather than just

        paddock.GetProportionOfDrainage();

    I need the properties to remain public at this stage, as they are too ingrained in usage throughout the code base. However, I don't really want a mixture of calling the method on SoilDrainageType directly, as that seems to defeat the purpose altogether. What would be the appropriate design approach in this situation? I can provide more information as required. Are my thoughts on refactoring this even appropriate, or is it best to leave it as is and use the property chaining to access the method as and when required?

    Read the article

  • Initialize array in amortized constant time -- what is this trick called?

    - by user946850
    There is this data structure that trades array-access performance against the need to iterate over the array when clearing it. You keep a generation counter with each entry, and also a global generation counter. Each write stamps the entry with the current global counter, and the "clear" operation just increments the global counter. On each read, you compare the entry's counter against the global one; if they differ, the entry is treated as cleared. This came up in an answer on Stack Overflow recently, but I don't remember whether the trick has an official name. Does it? One use case is Dijkstra's algorithm, if only a tiny subset of the nodes has to be relaxed, and if this has to be done repeatedly.
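
    For illustration, a minimal Python sketch of the structure as described (class and method names are mine, not an established API):

        class GenerationArray:
            """Array with O(1) clear: entries whose stamp is stale are
            treated as if they had never been written."""

            def __init__(self, size, default=None):
                self._values = [default] * size
                self._stamps = [0] * size   # per-entry generation counters
                self._generation = 1        # global generation counter
                self._default = default

            def __setitem__(self, i, value):
                self._values[i] = value
                self._stamps[i] = self._generation  # stamp with current generation

            def __getitem__(self, i):
                # A stale stamp means the entry predates the last clear().
                if self._stamps[i] != self._generation:
                    return self._default
                return self._values[i]

            def clear(self):
                self._generation += 1  # O(1): every entry becomes stale

        # Usage:
        arr = GenerationArray(1000000)
        arr[42] = "visited"
        arr.clear()              # constant time, touches no entries
        assert arr[42] is None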

    Read the article

  • MSDN Subscription

    - by Dave Yasko
    My work just started (a couple of months ago actually) allowing us access to our MSDN subscriptions.  This is totally cool.  I’ve been begging, borrowing, and stealing all these years to have access to Visual Studio and the other tools.  Now, I’ve got them all.  It is totally cool.  What’s even cooler is that I’m going to be installing them (along with a brandy spanky new Windows 7 installation) on a MacBook or MacBookPro in the coming weeks.  How’s that for most excellent?

    Read the article

  • Why do Google search results include pages disallowed in robots.txt?

    - by Ilmari Karonen
    I have some pages on my site that I want to keep search engines away from, so I disallowed them in my robots.txt file like this:

        User-Agent: *
        Disallow: /email

    Yet I recently noticed that Google still sometimes returns links to those pages in their search results. Why does this happen, and how can I stop it?

    Background: Several years ago, I made a simple web site for a club a relative of mine was involved in. They wanted to have e-mail links on their pages, so, to try and keep those e-mail addresses from ending up on too many spam lists, instead of using direct mailto: links I made those links point to a simple redirector / address-harvester trap script running on my own site. This script would return either a 301 redirect to the actual mailto: URL or, if it detected a suspicious access pattern, a page containing lots of random fake e-mail addresses and links to more such pages. To keep legitimate search bots away from the trap, I set up the robots.txt rule shown above, disallowing the entire space of both legit redirector links and trap pages.

    Just recently, however, one of the people in the club searched Google for their own name and was quite surprised when one of the results on the first page was a link to the redirector script, with a title consisting of their e-mail address followed by my name. Of course, they immediately e-mailed me and wanted to know how to get their address out of Google's index. I was quite surprised too, since I had no idea that Google would index such URLs at all, seemingly in violation of my robots.txt rule.

    I did manage to submit a removal request to Google, and it seems to have worked, but I'd like to know why and how Google is circumventing my robots.txt like that, and how to make sure that none of the disallowed pages will show up in their search results.

    P.S. I actually found out a possible explanation and solution, which I'll post below, while preparing this question, but I thought I'd ask it anyway in case someone else might have the same problem. Please do feel free to post your own answers. I'd also be interested in knowing if other search engines do this too, and whether the same solutions work for them also.
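
    For context: robots.txt controls crawling, not indexing. Google can still list a disallowed URL that it discovers through external links, showing just the URL and anchor text, precisely because it is not allowed to fetch the page and see any noindex signal. The documented fix is to permit crawling and serve that signal instead, for example as an X-Robots-Tag response header; a minimal sketch for Apache, assuming mod_headers is enabled:

        # .htaccess for the /email redirector. Note that the robots.txt
        # Disallow must be removed, or Googlebot will never fetch the
        # page and therefore never see this header.
        <IfModule mod_headers.c>
            Header set X-Robots-Tag "noindex, nofollow"
        </IfModule>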

    Read the article

  • Data Source Security Part 4

    - by Steve Felts
    So far, I have covered the Client Identity and Oracle Proxy Session features, with WLS or database credentials. This article will cover one more feature, identity-based pooling. Then there is one more topic to cover - how these options play with transactions.

    Identity-based Connection Pooling

    An identity-based pool creates a heterogeneous pool of connections. This allows applications to use a JDBC connection with a specific DBMS credential by pooling physical connections with different DBMS credentials. The DBMS credential is based on either the WebLogic user mapped to a database user or the database user directly, depending on the "use database credentials" setting as described earlier. Using this feature with "use database credentials" enabled seems to be what is proposed in the JDBC standard: basically, a heterogeneous pool with users specified by getConnection(user, password).

    The allocation of connections is more complex if the Enable Identity Based Connection Pooling attribute is enabled on the data source. When an application requests a database connection, the WebLogic Server instance selects an existing physical connection or creates a new physical connection with the requested DBMS identity. Heterogeneous connections are created as follows:

    1. At connection pool initialization, physical JDBC connections (up to the configured or default "initial capacity") are created with the configured default DBMS credential of the data source.
    2. An application tries to get a connection from a data source.
    3. (a) If "use database credentials" is not enabled, the user specified in getConnection is mapped to a DBMS credential, as described earlier. If the credential map doesn't have a matching user, the default DBMS credential from the data source descriptor is used.
       (b) If "use database credentials" is enabled, the user and password specified in getConnection are used directly.
    4. The connection pool is searched for a connection with a matching DBMS credential.
    5. If a match is found, the connection is reserved and returned to the application.
    6. If no match is found, a connection is created or reused based on the maximum capacity of the pool:
       - If the maximum capacity has not been reached, a new connection is created with the DBMS credential, reserved, and returned to the application.
       - If the pool has reached maximum capacity, a physical connection is selected from the pool based on the least recently used (LRU) algorithm and destroyed. A new connection is then created with the DBMS credential, reserved, and returned to the application.

    It should be clear that finding a matching connection is more expensive than in a homogeneous pool, and destroying a connection to create a new one is very expensive. If you can use a normal homogeneous pool or one of the lightweight options (client identity or an Oracle proxy connection), those should be used instead of identity-based pooling.

    Regardless of how physical connections are created, each physical connection in the pool has its own DBMS credential information maintained by the pool. Once a physical connection is reserved by the pool, it does not change its DBMS credential, even if the current thread changes its WebLogic user credential and continues to use the same connection.

    To configure this feature, select Enable Identity Based Connection Pooling. See "Enable identity-based connection pooling for a JDBC data source" in the Oracle WebLogic Server Administration Console Help: http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/EnableIdentityBasedConnectionPooling.html

    You must make the following changes to use the Logging Last Resource (LLR) transaction optimization with identity-based pooling, to get around the problem that multiple users will be accessing the associated transaction table:

    - Configure a custom schema for LLR using a fully qualified LLR table name. All LLR connections will then use the named schema rather than the default schema when accessing the LLR transaction table.
    - Use database-specific administration tools to grant permission to access the named LLR table to all users that could access it via a global transaction. By default, the LLR table is created during boot by the user configured for the connection in the data source; in most cases, the database will only allow access to this user and not to mapped users.

    Connections within Transactions

    Now that we have covered the behavior of all of these various options, it's time to discuss the exception to all of the rules. When you get a connection within a transaction, it is associated with the transaction context on a particular WLS instance.

    When getting a connection from a data source configured with non-XA LLR or 1PC (using the JTS driver) with global transactions, the first connection obtained within the transaction is returned on subsequent connection requests, regardless of the username/password specified and independent of the associated proxy user session, if any. The connection must be shared among all users of the connection when using LLR or 1PC.

    For XA data sources, the first connection obtained within the global transaction is returned on subsequent connection requests within the application server, regardless of the username/password specified and independent of the associated proxy user session, if any. The connection must be shared among all users of the connection within a global transaction within the application server/JVM.
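
    To make the allocation logic in steps 4-6 concrete, here is a minimal, language-neutral sketch (my own illustration, not WebLogic code; it keeps at most one idle connection per credential):

        from collections import OrderedDict

        class IdentityBasedPool:
            """Toy model of a heterogeneous pool: each idle physical
            connection is tagged with the credential it was created with."""

            def __init__(self, max_capacity, create_physical):
                self.max_capacity = max_capacity
                self.create_physical = create_physical  # factory: credential -> connection
                self.idle = OrderedDict()               # credential -> connection, LRU first

            def get_connection(self, credential):
                # Steps 4-5: reserve an idle connection with a matching credential.
                conn = self.idle.pop(credential, None)
                if conn is not None:
                    return conn
                # Step 6: no match. Evict the least recently used connection
                # if the pool is at maximum capacity, then create a new one.
                if len(self.idle) >= self.max_capacity:
                    _, lru = self.idle.popitem(last=False)
                    lru.close()  # destroying a physical connection is expensive
                return self.create_physical(credential)

            def release(self, credential, conn):
                # A returned connection becomes the most recently used entry.
                self.idle[credential] = conn
                self.idle.move_to_end(credential)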

    Read the article

  • Ways to prevent client seeing my code

    - by James Eggers
    I've got a bit of a strange problem. Basically, I'm building a fairly complex (un-compiled, interpreted) program in Python. I've been working on most of this code for other purposes for a few months, and therefore don't want my client to be able to simply copy and paste it and then try to sell it (it's worth a fair amount). The other problem is that I need the script to run on a server that my client is paying for. Is there any way I can secure a particular folder on the machine from root access, i.e. make it so that only one particular user can access the directory? The OS is Ubuntu.

    Read the article

  • How To Remotely Copy Files Over SSH Without Entering Your Password

    - by YatriTrivedi
    SSH is a life-saver when you need to remotely manage a computer, but did you know you can also upload and download files, too? Using SSH keys, you can skip having to enter passwords and use this for scripts! This process works on Linux and Mac OS, provided that they're properly configured for SSH access. If you're using Windows, you can use Cygwin to get Linux-like functionality, and with a little tweaking, SSH will run as well.
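
    As a sketch of the key-based workflow the article describes (host and file names are illustrative):

        # Generate a key pair; accept the default path. An empty passphrase
        # enables fully unattended scripts, at the cost of key security.
        ssh-keygen -t rsa

        # Append the public key to the remote account's authorized_keys:
        ssh-copy-id user@remote-host

        # Remote copies now work without a password prompt:
        scp localfile.txt user@remote-host:/home/user/
        scp user@remote-host:/home/user/remotefile.txt .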

    Read the article

  • SQL Server Migration Assistant 2008 (SSMA)

    One of my client's requirements is to migrate and consolidate his company departments' databases to SQL Server 2008. As I know the environment, they are using MySQL, MS Access and SQL Server with different applications. Now the company has decided to have a single dedicated SQL Server 2008 database server to host all the applications, so there are a few things to do to upgrade and migrate from MySQL and MS Access to SQL Server 2008. For the migration task, I found the SQL Server Migration Assistant 2008 (SSMA 2008) very useful; it reduces the effort and risk of migration. So in this tip, I will give an overview of SSMA 2008.

    Read the article

  • Can AdSense crawler view pages that require cookies?

    - by moomoochoo
    Details: I require users to agree to terms and conditions before they can view several pages on my site. Once they have agreed, a cookie is set and they can proceed to the webpage. If a user somehow ends up on the webpage without a cookie, they cannot access the page's content.

    My questions: Is the AdSense crawler able to set the cookie and visit these pages? If yes, how will it know to agree to the TOS? Is there some way to allow it access to the pages even if it can't use cookies?
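
    One commonly suggested approach, worth checking against current AdSense policy, is to let the ad crawler through the cookie gate by inspecting its user agent, since the AdSense crawler identifies itself as Mediapartners-Google and does not accept cookies. A minimal sketch (the request object and the render/redirect helpers are hypothetical placeholders for your framework):

        def is_adsense_crawler(user_agent):
            # The AdSense crawler sends "Mediapartners-Google" in its UA string.
            return "Mediapartners-Google" in (user_agent or "")

        def view_page(request):
            # render_protected_content / redirect_to_terms_page are
            # hypothetical framework helpers, not a real API.
            if request.cookies.get("tos_accepted") or \
               is_adsense_crawler(request.headers.get("User-Agent")):
                return render_protected_content(request)
            return redirect_to_terms_page(request)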

    Read the article

  • Database for Ubuntu

    - by Toby J
    I am trying Ubuntu 12.04, booted from a DVD, before I install it. I currently have Windows 8 (which I hate), and I have a couple of database programs with my movies - hundreds of movies - in them. Is there a database application available for Ubuntu that is compatible with my Microsoft database files? There is too much data in the current database file for me to re-enter it all. So far, I love Ubuntu 12.04. I have been able to access my Microsoft Works spreadsheet files and documents with no problem, and I like the Thunderbird e-mail client and just about everything else about Ubuntu 12.04. I just need to be able to access my database files and to write labels and envelopes. Thanks. Toby J, Paris, TN

    Read the article

  • nfs mountpoint named ``share'' breaks ls and man

    - by freddyb
    I mounted an NFS server to ~/share. This works fine as long as I'm at home, where the NFS share is in reach. Whenever I'm not, this seems to break access to all manpages: using man (or ls in my home directory) waits forever. Checking with strace reveals that they try to access the folder called share. Unmounting fails too, even with -l (lazy) and -f (force). I am asking three things here:

    1. Is ``share'' a magic name? Does something like MANPATH exist which I should avoid?
    2. How do I unmount without rebooting? (I already commented the share out in fstab.)
    3. What would you suggest I do to get network/position-based mounting of NFS shares?
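
    For what it's worth, the usual mitigations are a forced lazy unmount and "soft" mount options, so that an unreachable server cannot hang programs that merely touch the mount point. A sketch (server, path and option values are illustrative):

        # Try forcing and lazily detaching the dead mount in one go:
        sudo umount -f -l ~/share

        # In /etc/fstab, soft options bound how long operations can hang:
        server:/export  /home/freddyb/share  nfs  soft,timeo=30,retrans=2,noauto  0  0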

    Read the article

  • Why can't I view a specific website?

    - by user79263
    I am a network administrator at a small company (22 PCs) and I have a problem whose cause I can't pin down: I can't access a specific website from the LAN. I have a server with 2 NICs (one to the ISP and one to the LAN) running Debian 6, on which I have installed BIND (DNS) and a DHCP server. I want to mention that "nslookup mcel.co.mz" and "dig my_site" are not working correctly: I can't access the site from inside the LAN, and I can't send e-mail to people in that domain. When I run "dig mcel.co.mz @8.8.8.8", the domain is resolved.

    Why can't I view a specific website (www.mcel.co.mz)? Why can't I send e-mail to people on some domains (@mcel.co.mz)? I don't have any rule configured on the Debian server that would block access to the site. I am the only one responsible for the IT department.
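
    A few diagnostic commands that can narrow down whether the local BIND instance is the culprit (a sketch; adjust names and paths to your setup):

        # Compare answers from the local resolver and a public one:
        dig mcel.co.mz @localhost
        dig mcel.co.mz @8.8.8.8

        # Mail to the domain also fails, so compare the MX lookups too:
        dig MX mcel.co.mz @localhost
        dig MX mcel.co.mz @8.8.8.8

        # If only the local queries fail, check the BIND configuration
        # for syntax errors and watch the logs while querying:
        sudo named-checkconf
        sudo tail -f /var/log/syslog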

    Read the article

  • Leapfrog Crammer won't mount as a USB flash drive

    - by William
    I can't seem to get the Leapfrog Crammer study and sound system to show up as a flash drive under Ubuntu so I can transfer stuff to it. I don't want to install the Leapfrog bloatware; can someone help me with this? Additional information: when I plug my Crammer into my computer, it shows a 1 MB file system with a link to download the Crammer software, and it does not show any other partitions in Nautilus. According to an article on the internet, the Crammer is divided into three partitions: one with a link to install the Crammer software, one with all content (music, flash cards, etc.) and one for firmware. I want to know how to access the content partition so I can add music to the player. Thanks in advance.
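
    To see whether the other partitions are visible to Linux at all, something like the following helps (the device name is illustrative; check dmesg or lsblk for the real one):

        # List block devices and their partitions after plugging in the player:
        lsblk
        sudo fdisk -l

        # If a second partition shows up (e.g. /dev/sdb2), try mounting it:
        sudo mkdir -p /mnt/crammer
        sudo mount /dev/sdb2 /mnt/crammer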

    Read the article

  • Samba user does not have folder read permission

    - by user289455
    I have set up a special user for read-only Samba shares. I set him up in Samba and as a system user. I shared a couple of folders, but that user cannot access them. I know Samba is working, because I also shared the folders with the main user of the system, which is an admin account, and that works fine. How can I give this user read permission on all the directories I want to share, without changing anything for any other users of the system? For example, I don't want to give him ownership of any of the files/directories - just ongoing, recursive read access. "Ongoing" and "recursive" are important: if someone adds a file or directory, I still want him to automatically be able to read it.
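
    POSIX ACLs fit this "read everything, own nothing" requirement. A sketch, assuming the shared tree is /srv/share and the user is smbguest (both names illustrative) and the filesystem supports ACLs:

        # Read (and directory-traverse) access to everything that exists now:
        sudo setfacl -R -m u:smbguest:rX /srv/share

        # Default ACLs on the directories, so files and directories created
        # later inherit the same read access automatically:
        sudo find /srv/share -type d -exec setfacl -d -m u:smbguest:rX {} +

    Ownership stays unchanged, and no other user's permissions are touched.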

    Read the article

  • Stylecop 4.7.39.0 has been released

    - by TATWORTH
    StyleCop 4.7.39.0 has been released at http://stylecop.codeplex.com/releases/view/79972. The release notes follow:

    - Allow case sensitivity in the deprecated words and recognised words lists.
    - Styling fixes.
    - Fix for documentation spelling checks inside nested XML nodes.
    - Look for CustomDictionary.xml files in the folder of the .cs file.
    - Update the TabIndex in the spelling tab.
    - Updating default deprecated words and their alternatives.
    - Add support for specifying dictionary folders in the Settings.StyleCop file.
    - Rename StyleCopViolationError to StyleCopHighlightingError and all associated types.
    - Fix the Bulb Item for spelling mistakes to replace matching words correctly.
    - Fix the spelling parser for strings beginning with $$.
    - THREADING FIX: Make StyleCop execute analysis in process and not create 2 threads. Use CountdownEvent when we move to .NET 4.
    - Use the naming service for the Culture specified for the project.
    - Pass the actual violation through to ReSharper.
    - Ensure registry access code works for VS2008 add-ins.
    - Roll back registry changes to ensure the VS2008 plugin loads correctly.
    - Adding support for preferred alternative words for spelling.
    - Adding deprecated word support into the Settings.StyleCop file.
    - Spelling is only checked if Office 2010 is installed.
    - Allow editing of deprecated words and their alternatives in the settings editor.
    - Adding new resource strings.
    - Adding BulbItem and quick fixes for spelling errors.
    - Moving StringExtensions to common area.
    - Styling fixes.
    - Report all spelling errors found on a line.
    - Start of 4.7.39.0 dev.

    Read the article

  • Passing class names or objects?

    - by nischayn22
    I have a switch statement:

        switch ( $id ) {
            case 'abc':
                return 'Animal';
            case 'xyz':
                return 'Human';
            // many more
        }

    I am returning class names, and I use them to call some of the classes' static functions using call_user_func(). Instead, I could also create an object of the class, return that, and then call the static function from that object as $object::method($param):

        switch ( $id ) {
            case 'abc':
                return new Animal;
            case 'xyz':
                return new Human;
            // many more
        }

    Which way is more efficient?

    To make this question broader: I have classes with mostly static methods right now; putting the functions into classes is a kind of grouping idea here (for example, the DB table structure of Animal is given by class Animal, and likewise for the Human class). I need to access many functions from these classes, so the switch needs to give me access to the class.

    Read the article

  • Cloud consolidation handling multi databases

    - by llaszews
    I have spoken about virtualization and its different types, which include OS zones, application server domains, database schemas, VLANs and other approaches. Another approach is to create a virtually federated database in the cloud. dbSpaces is a company with technology for exactly this: a virtual database through which an organisation can access multiple data sources (or database spaces) in real time. Additionally, dbSpaces can be configured to access an organisation's internal data using a remote gateway, so that its dbSpace is seamless across the public and private cloud.

    Read the article

  • Subdomain Is Redirected and Causing an Error Because www. is Added

    - by user532493
    On my site, say example.com, if I try to access test.example.com, Firefox automatically adds www. to test.example.com, making it www.test.example.com, which causes an error. However, if I visit a site like my.ebay.com, no www. is added, so no error occurs. What's going on? Just in case, my .htaccess file is as follows:

        Options -Multiviews
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^example.com
        RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ $1.php

    I looked through Firebug, and it seems like Firefox doesn't even make an attempt to reach test.example.com before reverting to www.test.example.com. One nuance I didn't realize until now: if I try to access the site using test.example.com/ (with a trailing slash), the www. is added. If, however, that trailing slash is not there, I am sent to the subdomain properly.
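
    Whether or not it is related to the www. behaviour, the host condition above is unanchored; tightening it at least rules the rewrite out as a cause (a sketch):

        # Match the bare domain exactly: escape the dot and anchor the end,
        # so the condition cannot fire on unintended hosts.
        RewriteCond %{HTTP_HOST} ^example\.com$
        RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]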

    Read the article

  • External hard drive failing, is backup recovery possible?

    - by backitup
    I have a Seagate FreeAgent external hard drive. While I was backing it up on Windows XP, my PC crashed, and on reboot I received the horrifying "Delayed Write Error" citing "$MFT" and a few other files. I tried to unmount the drive, but to no avail. Now my PC just crashes when I try to access it via Windows. In Ubuntu I am able to view it through Disk Utility: SMART status is "DISK FAILURE IMMINENT". fdisk doesn't work, and the SMART tests fail on "Reallocated Sector Count". Is there any way for me to rescue any of my data? I can still access the drive, but as soon as I do, it crashes.
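
    With SMART reporting imminent failure, the standard advice is to image the whole disk once with GNU ddrescue and then recover files from the image, instead of browsing the failing drive directly. A sketch (the device name is illustrative; confirm it with lsblk first, and write the image onto a different, healthy disk):

        sudo apt-get install gddrescue

        # First pass: grab the easy data quickly, skipping bad areas (-n);
        # the map file lets an interrupted copy resume where it left off:
        sudo ddrescue -n /dev/sdb /media/healthy-disk/freeagent.img rescue.map

        # Second pass: retry the bad areas a few times:
        sudo ddrescue -r3 /dev/sdb /media/healthy-disk/freeagent.img rescue.map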

    Read the article

  • PLEASE HELP! Cannot login to server after file permissions change!

    - by John
    So, I did something really stupid. Please bear with me, as I am new to Ubuntu Server. I ran chmod -R 700 / while logged in as root. Now when I try to log in as my normal user, I immediately get kicked out. Is there any way to log back in to the server, as root or anyone else, so that I can change the permissions back? Or am I totally screwed? I don't think I have root access enabled in the /etc/ssh/sshd_config file. I do have physical access to the server. I really need some help here. Thanks in advance!
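
    For anyone in the same spot: with physical access, the GRUB recovery-mode root shell works even when SSH logins are broken. The original per-file modes generally cannot be fully restored after a recursive chmod (a reinstall is the reliable fix), but a blunt approximation like the following usually gets logins and sudo working again (a sketch; the values shown are the Ubuntu defaults):

        # From the GRUB menu: Advanced options -> recovery mode -> root shell
        mount -o remount,rw /

        # Give read/execute back where it is generally safe:
        chmod -R a+rX /bin /sbin /lib /usr /etc /opt /var /home

        # Then re-tighten the files that must stay restricted:
        chmod 4755 /usr/bin/sudo        # sudo must be setuid root
        chmod 440  /etc/sudoers
        chmod 640  /etc/shadow          # must not be world-readable
        chmod 600  /etc/ssh/ssh_host_*_key
        chmod 1777 /tmp /var/tmp        # sticky and world-writable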

    Read the article

  • Help recovering broken OS (permissions issue)

    - by Guandalino
    (At the bottom there is an important update.)

    I was doing experiments in order to back up a remote account to my local system, Ubuntu 12.04 LTS. I'm not confident with duplicity, and probably, due to wrong syntax, some local files have been replaced with remote files. This is just a supposition; I'm not sure it is the real cause of the OS corruption, but the corruption happened right after experimenting with backups, so I think I did something wrong in that regard. I became aware there was a problem when I tried to run a command using sudo:

        $ sudo ls
        sudo: unable to open /etc/sudoers: Permission denied
        sudo: no valid sudoers sources found, quitting
        sudo: unable to initialize policy plugin

    This is how /etc/sudoers looks:

        $ ls -ald /etc/sudoers
        -r--r----- 1 root root 788 Oct 2 18:30 /etc/sudoers

    At this point I tried to reboot, and now this is the message I get:

        The system is running in low graphics mode. Your screen, graphics card
        and input device settings could not be detected correctly. You will
        need to configure these yourself.

    I tried to follow the wizard to configure these settings, but without luck (the system prevents me from going on when I press "Next"). The thing that makes me a bit less worried is that all the data on the disk seems readable, and I'm able to access it using a live CD. I ran memtest and the RAM seems to be OK. Do you have any idea how to recover my system? I'm very glad to provide further information; just let me know what would be helpful.

    UPDATE. The issue is about wrong permissions, and this is how I discovered it: I mounted the root partition of the broken OS on /mnt/broken/ (from the live CD) and did ls /mnt/broken/. I got a permission-denied error, while I expected to see the directory listing; I had to use sudo ls /mnt/broken/ instead. Thus, without root permission via sudo, it's impossible to access the root of the broken OS. The current output of ls -ld /mnt/broken/ is:

        drwxr-x--- 29 1000 812 4096 2012-12-08 21:58 /mnt/broken

    Any thoughts on how to restore the old (working) set of permissions?
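
    For reference, on a stock Ubuntu 12.04 install the root directory is owned by root:root with mode 755, whereas the listing above shows uid 1000, gid 812 and mode 750. A first repair attempt from the live CD (a sketch; double-check the mount point before running it):

        sudo chown root:root /mnt/broken
        sudo chmod 755 /mnt/broken

        # sudo itself also refuses to run unless its binary and config
        # carry the expected modes:
        sudo chown root:root /mnt/broken/usr/bin/sudo /mnt/broken/etc/sudoers
        sudo chmod 4755 /mnt/broken/usr/bin/sudo   # setuid root
        sudo chmod 440 /mnt/broken/etc/sudoers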

    Read the article
